Test Report: KVM_Linux_crio 21833

839ba12bf3f470fdbddc75955152cc8402fc5889:2025-11-01:42154

Test fail (14/343)

TestAddons/parallel/Registry (363.25s)

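Summary: the registry addon pod registry-6b586f9694-b4ph6 stayed in ImagePullBackOff for the whole 6m0s wait because every pull of docker.io/registry:3.0.0 was rejected by Docker Hub's unauthenticated pull rate limit (toomanyrequests; see the kubelet events below). If the addons-994396 profile were still running, a re-check might look like the following (hypothetical follow-up commands, not part of the recorded test run):

	kubectl --context addons-994396 get pods -n kube-system -l actual-registry=true
	kubectl --context addons-994396 describe pod -n kube-system -l actual-registry=true
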
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 9.263618ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-b4ph6" [f2c8e5be-bee4-4b31-a8dc-ee43d6a6430c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
helpers_test.go:337: TestAddons/parallel/Registry: WARNING: pod list for "kube-system" "actual-registry=true" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:384: ***** TestAddons/parallel/Registry: pod "actual-registry=true" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:384: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-994396 -n addons-994396
addons_test.go:384: TestAddons/parallel/Registry: showing logs for failed pods as of 2025-11-01 08:58:12.39091742 +0000 UTC m=+827.417594666
addons_test.go:384: (dbg) Run:  kubectl --context addons-994396 describe po registry-6b586f9694-b4ph6 -n kube-system
addons_test.go:384: (dbg) kubectl --context addons-994396 describe po registry-6b586f9694-b4ph6 -n kube-system:
Name:             registry-6b586f9694-b4ph6
Namespace:        kube-system
Priority:         0
Service Account:  default
Node:             addons-994396/192.168.39.195
Start Time:       Sat, 01 Nov 2025 08:45:29 +0000
Labels:           actual-registry=true
                  addonmanager.kubernetes.io/mode=Reconcile
                  kubernetes.io/minikube-addons=registry
                  pod-template-hash=6b586f9694
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:           10.244.0.7
Controlled By:  ReplicaSet/registry-6b586f9694
Containers:
  registry:
    Container ID:
    Image:          docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e
    Image ID:
    Port:           5000/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      REGISTRY_STORAGE_DELETE_ENABLED:  true
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gxmqx (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-gxmqx:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason                           Age                    From               Message
----     ------                           ----                   ----               -------
Normal   Scheduled                        12m                    default-scheduler  Successfully assigned kube-system/registry-6b586f9694-b4ph6 to addons-994396
Warning  Failed                           9m23s (x2 over 11m)    kubelet            Failed to pull image "docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e": fetching target platform image selected from image index: reading manifest sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed                           6m54s (x4 over 11m)    kubelet            Error: ErrImagePull
Warning  Failed                           6m54s (x2 over 8m21s)  kubelet            Failed to pull image "docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e": reading manifest sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed                           6m15s (x7 over 11m)    kubelet            Error: ImagePullBackOff
Normal   Pulling                          5m23s (x5 over 12m)    kubelet            Pulling image "docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e"
Warning  FailedToRetrieveImagePullSecret  2m33s (x21 over 12m)   kubelet            Unable to retrieve some image pull secrets (gcp-auth); attempting to pull the image may not succeed.
Normal   BackOff                          78s (x22 over 11m)     kubelet            Back-off pulling image "docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e"
addons_test.go:384: (dbg) Run:  kubectl --context addons-994396 logs registry-6b586f9694-b4ph6 -n kube-system
addons_test.go:384: (dbg) Non-zero exit: kubectl --context addons-994396 logs registry-6b586f9694-b4ph6 -n kube-system: exit status 1 (77.177268ms)

** stderr ** 
	Error from server (BadRequest): container "registry" in pod "registry-6b586f9694-b4ph6" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:384: kubectl --context addons-994396 logs registry-6b586f9694-b4ph6 -n kube-system: exit status 1
addons_test.go:385: failed waiting for pod actual-registry: actual-registry=true within 6m0s: context deadline exceeded
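Note: the failing pull could be reproduced directly on the node to confirm the rate limit. A minimal sketch, assuming the addons-994396 VM is still running and that crictl is available in the guest (not something the test run itself does):

	out/minikube-linux-amd64 -p addons-994396 ssh -- sudo crictl pull docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e
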
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Registry]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-994396 -n addons-994396
helpers_test.go:252: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-994396 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-994396 logs -n 25: (1.48628596s)
helpers_test.go:260: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-147882 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-147882 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:44 UTC │
	│ delete  │ -p download-only-147882                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-147882 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:44 UTC │
	│ start   │ -o=json --download-only -p download-only-664461 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-664461 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:44 UTC │
	│ delete  │ -p download-only-664461                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-664461 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:44 UTC │
	│ delete  │ -p download-only-147882                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-147882 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:44 UTC │
	│ delete  │ -p download-only-664461                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-664461 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:44 UTC │
	│ start   │ --download-only -p binary-mirror-775538 --alsologtostderr --binary-mirror http://127.0.0.1:36997 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-775538 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │                     │
	│ delete  │ -p binary-mirror-775538                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-775538 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:44 UTC │
	│ addons  │ enable dashboard -p addons-994396                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │                     │
	│ addons  │ disable dashboard -p addons-994396                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │                     │
	│ start   │ -p addons-994396 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:51 UTC │
	│ addons  │ addons-994396 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:51 UTC │ 01 Nov 25 08:51 UTC │
	│ addons  │ addons-994396 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:52 UTC │ 01 Nov 25 08:52 UTC │
	│ addons  │ enable headlamp -p addons-994396 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:52 UTC │ 01 Nov 25 08:52 UTC │
	│ addons  │ addons-994396 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:52 UTC │ 01 Nov 25 08:52 UTC │
	│ addons  │ addons-994396 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:52 UTC │ 01 Nov 25 08:52 UTC │
	│ addons  │ addons-994396 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:52 UTC │ 01 Nov 25 08:52 UTC │
	│ addons  │ addons-994396 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:52 UTC │ 01 Nov 25 08:52 UTC │
	│ addons  │ addons-994396 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:54 UTC │ 01 Nov 25 08:56 UTC │
	│ addons  │ addons-994396 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:57 UTC │ 01 Nov 25 08:57 UTC │
	│ addons  │ addons-994396 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:57 UTC │ 01 Nov 25 08:57 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-994396                                                                                                                                                                                                                                                                                                                                                                                         │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:57 UTC │ 01 Nov 25 08:57 UTC │
	│ addons  │ addons-994396 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:57 UTC │ 01 Nov 25 08:57 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 08:44:38
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 08:44:38.415244  535088 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:44:38.415511  535088 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:44:38.415520  535088 out.go:374] Setting ErrFile to fd 2...
	I1101 08:44:38.415525  535088 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:44:38.415722  535088 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-530629/.minikube/bin
	I1101 08:44:38.416292  535088 out.go:368] Setting JSON to false
	I1101 08:44:38.417206  535088 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":62800,"bootTime":1761923878,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 08:44:38.417275  535088 start.go:143] virtualization: kvm guest
	I1101 08:44:38.419180  535088 out.go:179] * [addons-994396] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 08:44:38.420576  535088 notify.go:221] Checking for updates...
	I1101 08:44:38.420602  535088 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 08:44:38.422388  535088 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 08:44:38.423762  535088 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-530629/kubeconfig
	I1101 08:44:38.425054  535088 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-530629/.minikube
	I1101 08:44:38.426433  535088 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 08:44:38.427613  535088 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 08:44:38.429086  535088 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 08:44:38.459669  535088 out.go:179] * Using the kvm2 driver based on user configuration
	I1101 08:44:38.460716  535088 start.go:309] selected driver: kvm2
	I1101 08:44:38.460736  535088 start.go:930] validating driver "kvm2" against <nil>
	I1101 08:44:38.460750  535088 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 08:44:38.461509  535088 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 08:44:38.461750  535088 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 08:44:38.461788  535088 cni.go:84] Creating CNI manager for ""
	I1101 08:44:38.461839  535088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 08:44:38.461847  535088 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1101 08:44:38.461887  535088 start.go:353] cluster config:
	{Name:addons-994396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-994396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1101 08:44:38.462012  535088 iso.go:125] acquiring lock: {Name:mk4a0ae0d13e232f8e381ad8e5059e42b27a0733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 08:44:38.463350  535088 out.go:179] * Starting "addons-994396" primary control-plane node in "addons-994396" cluster
	I1101 08:44:38.464523  535088 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 08:44:38.464559  535088 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-530629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 08:44:38.464570  535088 cache.go:59] Caching tarball of preloaded images
	I1101 08:44:38.464648  535088 preload.go:233] Found /home/jenkins/minikube-integration/21833-530629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 08:44:38.464659  535088 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 08:44:38.464982  535088 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/config.json ...
	I1101 08:44:38.465015  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/config.json: {Name:mk89a75531523cc17e10cf65ac144e466baef6b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:44:38.465175  535088 start.go:360] acquireMachinesLock for addons-994396: {Name:mk0f0dee5270210132f861d1e08706cfde31b35b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 08:44:38.465227  535088 start.go:364] duration metric: took 38.791µs to acquireMachinesLock for "addons-994396"
	I1101 08:44:38.465244  535088 start.go:93] Provisioning new machine with config: &{Name:addons-994396 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:addons-994396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 08:44:38.465309  535088 start.go:125] createHost starting for "" (driver="kvm2")
	I1101 08:44:38.467651  535088 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1101 08:44:38.467824  535088 start.go:159] libmachine.API.Create for "addons-994396" (driver="kvm2")
	I1101 08:44:38.467852  535088 client.go:173] LocalClient.Create starting
	I1101 08:44:38.467960  535088 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem
	I1101 08:44:38.525135  535088 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem
	I1101 08:44:38.966403  535088 main.go:143] libmachine: creating domain...
	I1101 08:44:38.966427  535088 main.go:143] libmachine: creating network...
	I1101 08:44:38.968049  535088 main.go:143] libmachine: found existing default network
	I1101 08:44:38.968268  535088 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 08:44:38.968754  535088 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b9b7d0}
	I1101 08:44:38.968919  535088 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-994396</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 08:44:38.974811  535088 main.go:143] libmachine: creating private network mk-addons-994396 192.168.39.0/24...
	I1101 08:44:39.051181  535088 main.go:143] libmachine: private network mk-addons-994396 192.168.39.0/24 created
	I1101 08:44:39.051459  535088 main.go:143] libmachine: <network>
	  <name>mk-addons-994396</name>
	  <uuid>960ab3a9-e2ba-413f-8b77-ff4745b036d0</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:3e:a3:01'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 08:44:39.051486  535088 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396 ...
	I1101 08:44:39.051511  535088 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21833-530629/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso
	I1101 08:44:39.051536  535088 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21833-530629/.minikube
	I1101 08:44:39.051601  535088 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21833-530629/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21833-530629/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso...
	I1101 08:44:39.334278  535088 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa...
	I1101 08:44:39.562590  535088 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/addons-994396.rawdisk...
	I1101 08:44:39.562642  535088 main.go:143] libmachine: Writing magic tar header
	I1101 08:44:39.562674  535088 main.go:143] libmachine: Writing SSH key tar header
	I1101 08:44:39.562773  535088 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396 ...
	I1101 08:44:39.562837  535088 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396
	I1101 08:44:39.562920  535088 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396 (perms=drwx------)
	I1101 08:44:39.562944  535088 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21833-530629/.minikube/machines
	I1101 08:44:39.562958  535088 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21833-530629/.minikube/machines (perms=drwxr-xr-x)
	I1101 08:44:39.562977  535088 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21833-530629/.minikube
	I1101 08:44:39.562988  535088 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21833-530629/.minikube (perms=drwxr-xr-x)
	I1101 08:44:39.562999  535088 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21833-530629
	I1101 08:44:39.563010  535088 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21833-530629 (perms=drwxrwxr-x)
	I1101 08:44:39.563022  535088 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1101 08:44:39.563032  535088 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1101 08:44:39.563043  535088 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1101 08:44:39.563053  535088 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1101 08:44:39.563063  535088 main.go:143] libmachine: checking permissions on dir: /home
	I1101 08:44:39.563072  535088 main.go:143] libmachine: skipping /home - not owner
	I1101 08:44:39.563079  535088 main.go:143] libmachine: defining domain...
	I1101 08:44:39.564528  535088 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-994396</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/addons-994396.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-994396'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1101 08:44:39.569846  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:73:0a:92 in network default
	I1101 08:44:39.570479  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:39.570497  535088 main.go:143] libmachine: starting domain...
	I1101 08:44:39.570501  535088 main.go:143] libmachine: ensuring networks are active...
	I1101 08:44:39.571361  535088 main.go:143] libmachine: Ensuring network default is active
	I1101 08:44:39.571760  535088 main.go:143] libmachine: Ensuring network mk-addons-994396 is active
	I1101 08:44:39.572463  535088 main.go:143] libmachine: getting domain XML...
	I1101 08:44:39.574016  535088 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-994396</name>
	  <uuid>47158355-a959-4cbf-84ea-23a10000597a</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/addons-994396.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:2a:d2:e3'/>
	      <source network='mk-addons-994396'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:73:0a:92'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1101 08:44:40.850976  535088 main.go:143] libmachine: waiting for domain to start...
	I1101 08:44:40.852401  535088 main.go:143] libmachine: domain is now running
	I1101 08:44:40.852417  535088 main.go:143] libmachine: waiting for IP...
	I1101 08:44:40.853195  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:40.853985  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:40.853994  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:40.854261  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:40.854309  535088 retry.go:31] will retry after 216.262446ms: waiting for domain to come up
	I1101 08:44:41.071837  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:41.072843  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:41.072862  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:41.073274  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:41.073319  535088 retry.go:31] will retry after 360.302211ms: waiting for domain to come up
	I1101 08:44:41.434879  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:41.435804  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:41.435822  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:41.436172  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:41.436214  535088 retry.go:31] will retry after 371.777554ms: waiting for domain to come up
	I1101 08:44:41.809947  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:41.810703  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:41.810722  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:41.811072  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:41.811112  535088 retry.go:31] will retry after 462.843758ms: waiting for domain to come up
	I1101 08:44:42.275984  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:42.276618  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:42.276637  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:42.276993  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:42.277037  535088 retry.go:31] will retry after 560.265466ms: waiting for domain to come up
	I1101 08:44:42.838931  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:42.839781  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:42.839798  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:42.840224  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:42.840268  535088 retry.go:31] will retry after 839.411139ms: waiting for domain to come up
	I1101 08:44:43.681040  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:43.681790  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:43.681802  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:43.682192  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:43.682243  535088 retry.go:31] will retry after 1.099878288s: waiting for domain to come up
	I1101 08:44:44.783686  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:44.784502  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:44.784521  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:44.784840  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:44.784888  535088 retry.go:31] will retry after 1.052374717s: waiting for domain to come up
	I1101 08:44:45.839257  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:45.839889  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:45.839926  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:45.840243  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:45.840284  535088 retry.go:31] will retry after 1.704542625s: waiting for domain to come up
	I1101 08:44:47.547411  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:47.548205  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:47.548225  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:47.548588  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:47.548630  535088 retry.go:31] will retry after 1.752267255s: waiting for domain to come up
	I1101 08:44:49.302359  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:49.303199  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:49.303210  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:49.303522  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:49.303559  535088 retry.go:31] will retry after 2.861627149s: waiting for domain to come up
	I1101 08:44:52.168696  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:52.169368  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:52.169385  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:52.169681  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:52.169738  535088 retry.go:31] will retry after 2.277819072s: waiting for domain to come up
	I1101 08:44:54.449193  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:54.449957  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:54.449978  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:54.450273  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:54.450316  535088 retry.go:31] will retry after 3.87405165s: waiting for domain to come up
	I1101 08:44:58.329388  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.330073  535088 main.go:143] libmachine: domain addons-994396 has current primary IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.330089  535088 main.go:143] libmachine: found domain IP: 192.168.39.195
	I1101 08:44:58.330096  535088 main.go:143] libmachine: reserving static IP address...
	I1101 08:44:58.330490  535088 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-994396", mac: "52:54:00:2a:d2:e3", ip: "192.168.39.195"} in network mk-addons-994396
	I1101 08:44:58.532247  535088 main.go:143] libmachine: reserved static IP address 192.168.39.195 for domain addons-994396
	I1101 08:44:58.532270  535088 main.go:143] libmachine: waiting for SSH...
	I1101 08:44:58.532276  535088 main.go:143] libmachine: Getting to WaitForSSH function...
	I1101 08:44:58.535646  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.536214  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:58.536242  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.536445  535088 main.go:143] libmachine: Using SSH client type: native
	I1101 08:44:58.536737  535088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1101 08:44:58.536748  535088 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1101 08:44:58.655800  535088 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 08:44:58.656194  535088 main.go:143] libmachine: domain creation complete
	I1101 08:44:58.657668  535088 machine.go:94] provisionDockerMachine start ...
	I1101 08:44:58.660444  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.660857  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:58.660881  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.661078  535088 main.go:143] libmachine: Using SSH client type: native
	I1101 08:44:58.661273  535088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1101 08:44:58.661283  535088 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 08:44:58.781217  535088 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1101 08:44:58.781253  535088 buildroot.go:166] provisioning hostname "addons-994396"
	I1101 08:44:58.784387  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.784787  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:58.784821  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.784992  535088 main.go:143] libmachine: Using SSH client type: native
	I1101 08:44:58.785186  535088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1101 08:44:58.785198  535088 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-994396 && echo "addons-994396" | sudo tee /etc/hostname
	I1101 08:44:58.921865  535088 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-994396
	
	I1101 08:44:58.924651  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.925106  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:58.925158  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.925363  535088 main.go:143] libmachine: Using SSH client type: native
	I1101 08:44:58.925623  535088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1101 08:44:58.925647  535088 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-994396' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-994396/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-994396' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 08:44:59.053021  535088 main.go:143] libmachine: SSH cmd err, output: <nil>: 
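(For reference, the two SSH commands above write the hostname and a 127.0.1.1 hosts entry; a minimal sketch of checking the result from inside the guest, e.g. via `minikube ssh -p addons-994396`:)
	cat /etc/hostname                # expect: addons-994396 (written by the tee command above)
	grep addons-994396 /etc/hosts    # expect the 127.0.1.1 entry added by the script above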
	I1101 08:44:59.053062  535088 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21833-530629/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-530629/.minikube}
	I1101 08:44:59.053121  535088 buildroot.go:174] setting up certificates
	I1101 08:44:59.053134  535088 provision.go:84] configureAuth start
	I1101 08:44:59.056039  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.056491  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.056527  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.059390  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.059768  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.059793  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.059971  535088 provision.go:143] copyHostCerts
	I1101 08:44:59.060039  535088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-530629/.minikube/key.pem (1675 bytes)
	I1101 08:44:59.060157  535088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-530629/.minikube/ca.pem (1078 bytes)
	I1101 08:44:59.060215  535088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-530629/.minikube/cert.pem (1123 bytes)
	I1101 08:44:59.060262  535088 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-530629/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca-key.pem org=jenkins.addons-994396 san=[127.0.0.1 192.168.39.195 addons-994396 localhost minikube]
	I1101 08:44:59.098818  535088 provision.go:177] copyRemoteCerts
	I1101 08:44:59.098909  535088 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 08:44:59.101492  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.101853  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.101876  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.102044  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:44:59.192919  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 08:44:59.224374  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 08:44:59.254587  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 08:44:59.285112  535088 provision.go:87] duration metric: took 231.963697ms to configureAuth
	I1101 08:44:59.285151  535088 buildroot.go:189] setting minikube options for container-runtime
	I1101 08:44:59.285333  535088 config.go:182] Loaded profile config "addons-994396": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:44:59.288033  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.288440  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.288461  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.288660  535088 main.go:143] libmachine: Using SSH client type: native
	I1101 08:44:59.288854  535088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1101 08:44:59.288872  535088 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 08:44:59.552498  535088 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 08:44:59.552535  535088 machine.go:97] duration metric: took 894.848438ms to provisionDockerMachine
	I1101 08:44:59.552551  535088 client.go:176] duration metric: took 21.084691653s to LocalClient.Create
	I1101 08:44:59.552575  535088 start.go:167] duration metric: took 21.084749844s to libmachine.API.Create "addons-994396"
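(The CRI-O drop-in written over SSH above, /etc/sysconfig/crio.minikube, can be checked by hand; a minimal sketch assuming shell access to the guest:)
	cat /etc/sysconfig/crio.minikube    # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl is-active crio            # crio was restarted by the same command; expect "active"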
	I1101 08:44:59.552585  535088 start.go:293] postStartSetup for "addons-994396" (driver="kvm2")
	I1101 08:44:59.552598  535088 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 08:44:59.552698  535088 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 08:44:59.555985  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.556410  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.556446  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.556594  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:44:59.646378  535088 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 08:44:59.651827  535088 info.go:137] Remote host: Buildroot 2025.02
	I1101 08:44:59.651860  535088 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-530629/.minikube/addons for local assets ...
	I1101 08:44:59.652002  535088 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-530629/.minikube/files for local assets ...
	I1101 08:44:59.652045  535088 start.go:296] duration metric: took 99.451778ms for postStartSetup
	I1101 08:44:59.655428  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.655951  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.655983  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.656303  535088 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/config.json ...
	I1101 08:44:59.656524  535088 start.go:128] duration metric: took 21.191204758s to createHost
	I1101 08:44:59.659225  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.659662  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.659688  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.659918  535088 main.go:143] libmachine: Using SSH client type: native
	I1101 08:44:59.660165  535088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1101 08:44:59.660179  535088 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1101 08:44:59.778959  535088 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761986699.744832808
	
	I1101 08:44:59.778992  535088 fix.go:216] guest clock: 1761986699.744832808
	I1101 08:44:59.779003  535088 fix.go:229] Guest: 2025-11-01 08:44:59.744832808 +0000 UTC Remote: 2025-11-01 08:44:59.656538269 +0000 UTC m=+21.291332648 (delta=88.294539ms)
	I1101 08:44:59.779025  535088 fix.go:200] guest clock delta is within tolerance: 88.294539ms
	I1101 08:44:59.779033  535088 start.go:83] releasing machines lock for "addons-994396", held for 21.31379566s
	I1101 08:44:59.782561  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.783052  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.783085  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.783744  535088 ssh_runner.go:195] Run: cat /version.json
	I1101 08:44:59.783923  535088 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 08:44:59.786949  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.787338  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.787364  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.787467  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.787547  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:44:59.788054  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.788100  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.788306  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:44:59.898855  535088 ssh_runner.go:195] Run: systemctl --version
	I1101 08:44:59.905749  535088 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 08:45:00.064091  535088 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 08:45:00.072201  535088 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 08:45:00.072263  535088 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 08:45:00.092562  535088 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 08:45:00.092584  535088 start.go:496] detecting cgroup driver to use...
	I1101 08:45:00.092661  535088 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 08:45:00.112010  535088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 08:45:00.129164  535088 docker.go:218] disabling cri-docker service (if available) ...
	I1101 08:45:00.129222  535088 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 08:45:00.147169  535088 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 08:45:00.164876  535088 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 08:45:00.317011  535088 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 08:45:00.521291  535088 docker.go:234] disabling docker service ...
	I1101 08:45:00.521377  535088 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 08:45:00.537927  535088 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 08:45:00.552544  535088 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 08:45:00.714401  535088 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 08:45:00.855387  535088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 08:45:00.871802  535088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 08:45:00.895848  535088 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 08:45:00.895969  535088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:45:00.908735  535088 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 08:45:00.908831  535088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:45:00.924244  535088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:45:00.938467  535088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:45:00.951396  535088 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 08:45:00.965054  535088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:45:00.977595  535088 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:45:00.998868  535088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
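(Taken together, the sed edits above pin the pause image, switch the cgroup manager, and open unprivileged ports; a minimal sketch of confirming the resulting keys on the guest, with expected values inferred from the log rather than a verbatim dump of the file:)
	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected (illustrative):
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",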
	I1101 08:45:01.011547  535088 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 08:45:01.022709  535088 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 08:45:01.022775  535088 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 08:45:01.044963  535088 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 08:45:01.057499  535088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 08:45:01.203336  535088 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 08:45:01.311792  535088 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 08:45:01.311884  535088 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 08:45:01.317453  535088 start.go:564] Will wait 60s for crictl version
	I1101 08:45:01.317538  535088 ssh_runner.go:195] Run: which crictl
	I1101 08:45:01.321986  535088 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 08:45:01.367266  535088 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1101 08:45:01.367363  535088 ssh_runner.go:195] Run: crio --version
	I1101 08:45:01.398127  535088 ssh_runner.go:195] Run: crio --version
	I1101 08:45:01.431424  535088 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1101 08:45:01.435939  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:01.436441  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:01.436471  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:01.436732  535088 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1101 08:45:01.441662  535088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 08:45:01.457635  535088 kubeadm.go:884] updating cluster {Name:addons-994396 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-994396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 08:45:01.457753  535088 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 08:45:01.457802  535088 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 08:45:01.495090  535088 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1101 08:45:01.495193  535088 ssh_runner.go:195] Run: which lz4
	I1101 08:45:01.500348  535088 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 08:45:01.506036  535088 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 08:45:01.506082  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1101 08:45:03.083875  535088 crio.go:462] duration metric: took 1.583585669s to copy over tarball
	I1101 08:45:03.084036  535088 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 08:45:04.665932  535088 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.581842537s)
	I1101 08:45:04.665965  535088 crio.go:469] duration metric: took 1.582007439s to extract the tarball
	I1101 08:45:04.665976  535088 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 08:45:04.707682  535088 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 08:45:04.751036  535088 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 08:45:04.751073  535088 cache_images.go:86] Images are preloaded, skipping loading
	I1101 08:45:04.751085  535088 kubeadm.go:935] updating node { 192.168.39.195 8443 v1.34.1 crio true true} ...
	I1101 08:45:04.751212  535088 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-994396 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.195
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-994396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
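(The kubelet unit override above is written to disk a few lines below as 10-kubeadm.conf; a minimal sketch of inspecting the rendered configuration on the guest:)
	systemctl cat kubelet.service                               # shows the base unit plus drop-ins
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf  # path used by the scp further down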
	I1101 08:45:04.751302  535088 ssh_runner.go:195] Run: crio config
	I1101 08:45:04.801702  535088 cni.go:84] Creating CNI manager for ""
	I1101 08:45:04.801733  535088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 08:45:04.801758  535088 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 08:45:04.801791  535088 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.195 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-994396 NodeName:addons-994396 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.195"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.195 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 08:45:04.801978  535088 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.195
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-994396"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.195"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.195"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 08:45:04.802066  535088 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 08:45:04.814571  535088 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 08:45:04.814653  535088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 08:45:04.826605  535088 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1101 08:45:04.846937  535088 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 08:45:04.868213  535088 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
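(The kubeadm configuration rendered above has just been copied to the guest as /var/tmp/minikube/kubeadm.yaml.new; a purely illustrative sketch of how it could be validated by hand, using the kubeadm binary path that appears in this log:)
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run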
	I1101 08:45:04.888962  535088 ssh_runner.go:195] Run: grep 192.168.39.195	control-plane.minikube.internal$ /etc/hosts
	I1101 08:45:04.893299  535088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.195	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 08:45:04.908547  535088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 08:45:05.049704  535088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 08:45:05.081089  535088 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396 for IP: 192.168.39.195
	I1101 08:45:05.081124  535088 certs.go:195] generating shared ca certs ...
	I1101 08:45:05.081146  535088 certs.go:227] acquiring lock for ca certs: {Name:mkfa41f6ee02a6d4adbbbd414d6f4b29bf47b076 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.081312  535088 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-530629/.minikube/ca.key
	I1101 08:45:05.135626  535088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt ...
	I1101 08:45:05.135669  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt: {Name:mk42d9a91568201fc7bb838317bb109a9d557e4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.135920  535088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-530629/.minikube/ca.key ...
	I1101 08:45:05.135935  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/ca.key: {Name:mk8868035ca874da4b6bcd8361c76f97522a09dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.136031  535088 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.key
	I1101 08:45:05.223112  535088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.crt ...
	I1101 08:45:05.223159  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.crt: {Name:mk17c24c1e5b8188202459729e4a5c2f9a4008a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.223343  535088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.key ...
	I1101 08:45:05.223356  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.key: {Name:mk64bb220f00b339bafb0b18442258c31c6af7ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.223432  535088 certs.go:257] generating profile certs ...
	I1101 08:45:05.223509  535088 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.key
	I1101 08:45:05.223524  535088 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt with IP's: []
	I1101 08:45:05.791770  535088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt ...
	I1101 08:45:05.791805  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: {Name:mk739df015c10897beee55b57aac6a9687c49aee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.791993  535088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.key ...
	I1101 08:45:05.792008  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.key: {Name:mk22e303787fbf3b8945b47ac917db338129138f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.792086  535088 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.key.2a971b58
	I1101 08:45:05.792105  535088 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.crt.2a971b58 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.195]
	I1101 08:45:05.964688  535088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.crt.2a971b58 ...
	I1101 08:45:05.964721  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.crt.2a971b58: {Name:mkc85c65639cbe37cb2f18c20238504fe651c568 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.964892  535088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.key.2a971b58 ...
	I1101 08:45:05.964917  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.key.2a971b58: {Name:mk0a07f1288d6c9ced8ef2d4bb53cbfce6f3c734 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.964998  535088 certs.go:382] copying /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.crt.2a971b58 -> /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.crt
	I1101 08:45:05.965075  535088 certs.go:386] copying /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.key.2a971b58 -> /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.key
	I1101 08:45:05.965124  535088 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.key
	I1101 08:45:05.965142  535088 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.crt with IP's: []
	I1101 08:45:06.097161  535088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.crt ...
	I1101 08:45:06.097197  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.crt: {Name:mke456d45c85355b327c605777e7e939bd178f8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:06.097374  535088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.key ...
	I1101 08:45:06.097388  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.key: {Name:mk96b8f9598bf40057b4d6b2c6e97a30a363b3bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:06.097558  535088 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 08:45:06.097602  535088 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem (1078 bytes)
	I1101 08:45:06.097627  535088 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem (1123 bytes)
	I1101 08:45:06.097651  535088 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/key.pem (1675 bytes)
	I1101 08:45:06.098363  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 08:45:06.130486  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 08:45:06.160429  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 08:45:06.189962  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 08:45:06.219452  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 08:45:06.250552  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 08:45:06.282860  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 08:45:06.313986  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 08:45:06.344383  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 08:45:06.376611  535088 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 08:45:06.399751  535088 ssh_runner.go:195] Run: openssl version
	I1101 08:45:06.406933  535088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 08:45:06.421716  535088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 08:45:06.427410  535088 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 08:45:06.427478  535088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 08:45:06.435363  535088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
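(The symlink created above, /etc/ssl/certs/b5213941.0, is named after the OpenSSL subject hash of the minikube CA plus a ".0" suffix; a minimal sketch of checking it by hand on the guest:)
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints the hash used for the link name
	ls -l /etc/ssl/certs/b5213941.0                                           # should point at minikubeCA.pem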
	I1101 08:45:06.449854  535088 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 08:45:06.455299  535088 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 08:45:06.455368  535088 kubeadm.go:401] StartCluster: {Name:addons-994396 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-994396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 08:45:06.455464  535088 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:45:06.455528  535088 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:45:06.499318  535088 cri.go:89] found id: ""
	I1101 08:45:06.499395  535088 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 08:45:06.513696  535088 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 08:45:06.527370  535088 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 08:45:06.541099  535088 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 08:45:06.541122  535088 kubeadm.go:158] found existing configuration files:
	
	I1101 08:45:06.541170  535088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 08:45:06.553610  535088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 08:45:06.553677  535088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 08:45:06.567384  535088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 08:45:06.580377  535088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 08:45:06.580444  535088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 08:45:06.593440  535088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 08:45:06.605393  535088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 08:45:06.605460  535088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 08:45:06.618978  535088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 08:45:06.631411  535088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 08:45:06.631487  535088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 08:45:06.645452  535088 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1101 08:45:06.719122  535088 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 08:45:06.719190  535088 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 08:45:06.829004  535088 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 08:45:06.829160  535088 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 08:45:06.829291  535088 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 08:45:06.841691  535088 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 08:45:06.866137  535088 out.go:252]   - Generating certificates and keys ...
	I1101 08:45:06.866269  535088 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 08:45:06.866364  535088 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 08:45:07.164883  535088 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 08:45:07.767615  535088 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 08:45:08.072088  535088 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 08:45:08.514870  535088 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 08:45:08.646331  535088 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 08:45:08.646504  535088 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-994396 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	I1101 08:45:08.781122  535088 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 08:45:08.781335  535088 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-994396 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	I1101 08:45:08.899420  535088 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 08:45:09.007181  535088 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 08:45:09.224150  535088 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 08:45:09.224224  535088 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 08:45:09.511033  535088 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 08:45:09.752693  535088 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 08:45:09.819463  535088 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 08:45:10.005082  535088 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 08:45:10.463552  535088 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 08:45:10.464025  535088 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 08:45:10.466454  535088 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 08:45:10.471575  535088 out.go:252]   - Booting up control plane ...
	I1101 08:45:10.471714  535088 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 08:45:10.471809  535088 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 08:45:10.471913  535088 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 08:45:10.490781  535088 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 08:45:10.491002  535088 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 08:45:10.498306  535088 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 08:45:10.498812  535088 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 08:45:10.498893  535088 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 08:45:10.686796  535088 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 08:45:10.686991  535088 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 08:45:11.697343  535088 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.005207328s
	I1101 08:45:11.699752  535088 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 08:45:11.699949  535088 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.195:8443/livez
	I1101 08:45:11.700150  535088 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 08:45:11.704134  535088 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 08:45:13.981077  535088 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.280860487s
	I1101 08:45:15.371368  535088 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.67283221s
	I1101 08:45:17.198417  535088 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501722237s
	I1101 08:45:17.211581  535088 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 08:45:17.231075  535088 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 08:45:17.253882  535088 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 08:45:17.254137  535088 kubeadm.go:319] [mark-control-plane] Marking the node addons-994396 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 08:45:17.268868  535088 kubeadm.go:319] [bootstrap-token] Using token: f9fr0l.j77e5jevkskl9xb5
	I1101 08:45:17.270121  535088 out.go:252]   - Configuring RBAC rules ...
	I1101 08:45:17.270326  535088 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 08:45:17.277792  535088 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 08:45:17.293695  535088 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 08:45:17.296955  535088 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 08:45:17.300284  535088 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 08:45:17.303890  535088 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 08:45:17.605222  535088 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 08:45:18.065761  535088 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 08:45:18.604676  535088 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 08:45:18.605674  535088 kubeadm.go:319] 
	I1101 08:45:18.605802  535088 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 08:45:18.605830  535088 kubeadm.go:319] 
	I1101 08:45:18.605992  535088 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 08:45:18.606023  535088 kubeadm.go:319] 
	I1101 08:45:18.606068  535088 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 08:45:18.606156  535088 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 08:45:18.606234  535088 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 08:45:18.606243  535088 kubeadm.go:319] 
	I1101 08:45:18.606321  535088 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 08:45:18.606330  535088 kubeadm.go:319] 
	I1101 08:45:18.606402  535088 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 08:45:18.606415  535088 kubeadm.go:319] 
	I1101 08:45:18.606489  535088 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 08:45:18.606605  535088 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 08:45:18.606702  535088 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 08:45:18.606712  535088 kubeadm.go:319] 
	I1101 08:45:18.606815  535088 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 08:45:18.606947  535088 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 08:45:18.606965  535088 kubeadm.go:319] 
	I1101 08:45:18.607067  535088 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token f9fr0l.j77e5jevkskl9xb5 \
	I1101 08:45:18.607196  535088 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:56aa18b20985495d814b65ba7a2f910118620c74c98b944601f44598a9c0be1d \
	I1101 08:45:18.607233  535088 kubeadm.go:319] 	--control-plane 
	I1101 08:45:18.607244  535088 kubeadm.go:319] 
	I1101 08:45:18.607366  535088 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 08:45:18.607389  535088 kubeadm.go:319] 
	I1101 08:45:18.607497  535088 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token f9fr0l.j77e5jevkskl9xb5 \
	I1101 08:45:18.607642  535088 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:56aa18b20985495d814b65ba7a2f910118620c74c98b944601f44598a9c0be1d 
	I1101 08:45:18.609590  535088 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 08:45:18.609615  535088 cni.go:84] Creating CNI manager for ""
	I1101 08:45:18.609625  535088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 08:45:18.611467  535088 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 08:45:18.612559  535088 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 08:45:18.629659  535088 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
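The bridge CNI config is written to /etc/cni/net.d/1-k8s.conflist (496 bytes per the line above). A quick way to double-check it on the node, reusing the profile name and path from this log (the inspection commands are illustrative and not part of the test run):

    minikube -p addons-994396 ssh            # open a shell on the minikube node
    ls -l /etc/cni/net.d/                    # the 1-k8s.conflist written above should be present
    sudo cat /etc/cni/net.d/1-k8s.conflist   # dump the bridge CNI configuration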
	I1101 08:45:18.653188  535088 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 08:45:18.653266  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:18.653283  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-994396 minikube.k8s.io/updated_at=2025_11_01T08_45_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7 minikube.k8s.io/name=addons-994396 minikube.k8s.io/primary=true
	I1101 08:45:18.823964  535088 ops.go:34] apiserver oom_adj: -16
	I1101 08:45:18.824003  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:19.324429  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:19.824169  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:20.324357  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:20.825065  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:21.324643  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:21.824929  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:22.325055  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:22.824179  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:23.324346  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:23.422037  535088 kubeadm.go:1114] duration metric: took 4.768840437s to wait for elevateKubeSystemPrivileges
	I1101 08:45:23.422092  535088 kubeadm.go:403] duration metric: took 16.966730014s to StartCluster
	I1101 08:45:23.422117  535088 settings.go:142] acquiring lock: {Name:mke0bea80b55c21af3a3a0f83862cfe6da014dd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:23.422289  535088 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-530629/kubeconfig
	I1101 08:45:23.422848  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/kubeconfig: {Name:mk1f1e6312f33030082fd627c6f74ca7eee16587 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:23.423145  535088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 08:45:23.423170  535088 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 08:45:23.423239  535088 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1101 08:45:23.423378  535088 addons.go:70] Setting yakd=true in profile "addons-994396"
	I1101 08:45:23.423402  535088 addons.go:239] Setting addon yakd=true in "addons-994396"
	I1101 08:45:23.423420  535088 addons.go:70] Setting inspektor-gadget=true in profile "addons-994396"
	I1101 08:45:23.423440  535088 config.go:182] Loaded profile config "addons-994396": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:45:23.423457  535088 addons.go:239] Setting addon inspektor-gadget=true in "addons-994396"
	I1101 08:45:23.423459  535088 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-994396"
	I1101 08:45:23.423473  535088 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-994396"
	I1101 08:45:23.423435  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.423491  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.423507  535088 addons.go:70] Setting registry=true in profile "addons-994396"
	I1101 08:45:23.423518  535088 addons.go:239] Setting addon registry=true in "addons-994396"
	I1101 08:45:23.423522  535088 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-994396"
	I1101 08:45:23.423539  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.423555  535088 addons.go:70] Setting cloud-spanner=true in profile "addons-994396"
	I1101 08:45:23.423568  535088 addons.go:239] Setting addon cloud-spanner=true in "addons-994396"
	I1101 08:45:23.423606  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.423731  535088 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-994396"
	I1101 08:45:23.423760  535088 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-994396"
	I1101 08:45:23.424125  535088 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-994396"
	I1101 08:45:23.424214  535088 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-994396"
	I1101 08:45:23.424248  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.423443  535088 addons.go:70] Setting metrics-server=true in profile "addons-994396"
	I1101 08:45:23.424283  535088 addons.go:239] Setting addon metrics-server=true in "addons-994396"
	I1101 08:45:23.424313  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.423545  535088 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-994396"
	I1101 08:45:23.424411  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.424496  535088 addons.go:70] Setting ingress=true in profile "addons-994396"
	I1101 08:45:23.423498  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.424512  535088 addons.go:239] Setting addon ingress=true in "addons-994396"
	I1101 08:45:23.424544  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.425045  535088 addons.go:70] Setting registry-creds=true in profile "addons-994396"
	I1101 08:45:23.425074  535088 addons.go:239] Setting addon registry-creds=true in "addons-994396"
	I1101 08:45:23.425105  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.425174  535088 addons.go:70] Setting volcano=true in profile "addons-994396"
	I1101 08:45:23.425210  535088 addons.go:239] Setting addon volcano=true in "addons-994396"
	I1101 08:45:23.425245  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.423474  535088 addons.go:70] Setting default-storageclass=true in profile "addons-994396"
	I1101 08:45:23.425528  535088 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-994396"
	I1101 08:45:23.425555  535088 addons.go:70] Setting gcp-auth=true in profile "addons-994396"
	I1101 08:45:23.425587  535088 addons.go:70] Setting volumesnapshots=true in profile "addons-994396"
	I1101 08:45:23.425594  535088 mustload.go:66] Loading cluster: addons-994396
	I1101 08:45:23.425605  535088 addons.go:239] Setting addon volumesnapshots=true in "addons-994396"
	I1101 08:45:23.425629  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.425759  535088 config.go:182] Loaded profile config "addons-994396": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:45:23.426001  535088 addons.go:70] Setting storage-provisioner=true in profile "addons-994396"
	I1101 08:45:23.426034  535088 addons.go:239] Setting addon storage-provisioner=true in "addons-994396"
	I1101 08:45:23.426060  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.426263  535088 addons.go:70] Setting ingress-dns=true in profile "addons-994396"
	I1101 08:45:23.426312  535088 addons.go:239] Setting addon ingress-dns=true in "addons-994396"
	I1101 08:45:23.426349  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.428071  535088 out.go:179] * Verifying Kubernetes components...
	I1101 08:45:23.430376  535088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 08:45:23.432110  535088 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1101 08:45:23.432211  535088 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1101 08:45:23.432239  535088 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1101 08:45:23.432548  535088 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-994396"
	I1101 08:45:23.433347  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.433599  535088 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1101 08:45:23.433622  535088 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1101 08:45:23.434372  535088 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1101 08:45:23.434372  535088 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1101 08:45:23.434372  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1101 08:45:23.434399  535088 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	W1101 08:45:23.434936  535088 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1101 08:45:23.434947  535088 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1101 08:45:23.434397  535088 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1101 08:45:23.435739  535088 addons.go:239] Setting addon default-storageclass=true in "addons-994396"
	I1101 08:45:23.435133  535088 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 08:45:23.435780  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.435145  535088 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1101 08:45:23.435145  535088 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1101 08:45:23.435569  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.436246  535088 out.go:179]   - Using image docker.io/registry:3.0.0
	I1101 08:45:23.436291  535088 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 08:45:23.437459  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1101 08:45:23.436270  535088 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1101 08:45:23.437541  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1101 08:45:23.437032  535088 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 08:45:23.437636  535088 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 08:45:23.437844  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1101 08:45:23.437918  535088 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 08:45:23.437851  535088 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1101 08:45:23.437941  535088 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 08:45:23.438856  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1101 08:45:23.437976  535088 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 08:45:23.438988  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1101 08:45:23.439032  535088 out.go:179]   - Using image docker.io/busybox:stable
	I1101 08:45:23.439073  535088 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1101 08:45:23.439539  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1101 08:45:23.439090  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1101 08:45:23.439094  535088 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1101 08:45:23.439317  535088 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 08:45:23.439929  535088 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 08:45:23.439932  535088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1101 08:45:23.439957  535088 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1101 08:45:23.439990  535088 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 08:45:23.440001  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 08:45:23.440144  535088 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 08:45:23.440159  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1101 08:45:23.442297  535088 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 08:45:23.442308  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1101 08:45:23.442298  535088 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1101 08:45:23.443272  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.443791  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.443933  535088 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 08:45:23.443957  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1101 08:45:23.444059  535088 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 08:45:23.444083  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1101 08:45:23.444856  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.444941  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.445160  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1101 08:45:23.445705  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.446038  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.446083  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.446929  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.448105  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1101 08:45:23.448713  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.449090  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.450028  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.450296  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.450327  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.450341  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.450369  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.450600  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1101 08:45:23.451017  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.451085  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.451162  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.451241  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.451823  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.451855  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.452155  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.452274  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.452437  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.452519  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.452542  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.452550  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.452567  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.452769  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.453008  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.453181  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.453204  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.453341  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1101 08:45:23.453485  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.453526  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.453547  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.453582  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.453698  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.453748  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.453776  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.453961  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.454247  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.454637  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.454592  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.454668  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.454765  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.454810  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.454640  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.454828  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.454953  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.455189  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.455476  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.455511  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.455565  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.455603  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.455714  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.455949  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1101 08:45:23.456005  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.457369  535088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1101 08:45:23.457390  535088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1101 08:45:23.460387  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.460852  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.460874  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.461072  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	W1101 08:45:23.763758  535088 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57416->192.168.39.195:22: read: connection reset by peer
	I1101 08:45:23.763807  535088 retry.go:31] will retry after 294.020846ms: ssh: handshake failed: read tcp 192.168.39.1:57416->192.168.39.195:22: read: connection reset by peer
	W1101 08:45:23.763891  535088 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57426->192.168.39.195:22: read: connection reset by peer
	I1101 08:45:23.763941  535088 retry.go:31] will retry after 247.932093ms: ssh: handshake failed: read tcp 192.168.39.1:57426->192.168.39.195:22: read: connection reset by peer
	I1101 08:45:23.987612  535088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 08:45:23.987618  535088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
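The pipeline above rewrites the coredns ConfigMap in place: the sed expressions insert a `log` directive before the existing `errors` line and a hosts block ahead of the `forward . /etc/resolv.conf` plugin, so the patched Corefile fragment should read roughly as follows (whitespace approximate, the rest of the Corefile unchanged):

            log
            errors
            ...
            hosts {
               192.168.39.1 host.minikube.internal
               fallthrough
            }
            forward . /etc/resolv.conf ...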
	I1101 08:45:24.391549  535088 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1101 08:45:24.391592  535088 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1101 08:45:24.396118  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 08:45:24.428988  535088 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1101 08:45:24.429026  535088 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1101 08:45:24.539937  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 08:45:24.542018  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 08:45:24.551067  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1101 08:45:24.578439  535088 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1101 08:45:24.578476  535088 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1101 08:45:24.590870  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 08:45:24.593597  535088 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 08:45:24.593630  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1101 08:45:24.648891  535088 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:24.648945  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1101 08:45:24.654530  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 08:45:24.691639  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 08:45:24.775174  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 08:45:24.894476  535088 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1101 08:45:24.894518  535088 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1101 08:45:25.110719  535088 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1101 08:45:25.110755  535088 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1101 08:45:25.248567  535088 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 08:45:25.248606  535088 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 08:45:25.251834  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 08:45:25.279634  535088 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1101 08:45:25.279661  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1101 08:45:25.282613  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:25.356642  535088 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1101 08:45:25.356672  535088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1101 08:45:25.596573  535088 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1101 08:45:25.596609  535088 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1101 08:45:25.610846  535088 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1101 08:45:25.610885  535088 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1101 08:45:25.674735  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1101 08:45:25.705462  535088 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 08:45:25.705495  535088 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 08:45:25.746878  535088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1101 08:45:25.746929  535088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1101 08:45:25.925617  535088 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1101 08:45:25.925645  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1101 08:45:25.996036  535088 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1101 08:45:25.996070  535088 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1101 08:45:26.051328  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 08:45:26.240447  535088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1101 08:45:26.240483  535088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1101 08:45:26.408185  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1101 08:45:26.436460  535088 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 08:45:26.436501  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1101 08:45:26.557448  535088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1101 08:45:26.557481  535088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1101 08:45:26.856571  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 08:45:27.059648  535088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1101 08:45:27.059683  535088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1101 08:45:27.286113  535088 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.298454996s)
	I1101 08:45:27.286197  535088 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.298476587s)
	I1101 08:45:27.286240  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.890088886s)
	I1101 08:45:27.286229  535088 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1101 08:45:27.286918  535088 node_ready.go:35] waiting up to 6m0s for node "addons-994396" to be "Ready" ...
	I1101 08:45:27.312278  535088 node_ready.go:49] node "addons-994396" is "Ready"
	I1101 08:45:27.312325  535088 node_ready.go:38] duration metric: took 25.37676ms for node "addons-994396" to be "Ready" ...
	I1101 08:45:27.312346  535088 api_server.go:52] waiting for apiserver process to appear ...
	I1101 08:45:27.312422  535088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 08:45:27.686576  535088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1101 08:45:27.686612  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1101 08:45:27.792267  535088 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-994396" context rescaled to 1 replicas
	I1101 08:45:28.140990  535088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1101 08:45:28.141032  535088 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1101 08:45:28.704311  535088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1101 08:45:28.704352  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1101 08:45:29.292401  535088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1101 08:45:29.292429  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1101 08:45:29.854708  535088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 08:45:29.854740  535088 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1101 08:45:30.288568  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 08:45:30.575091  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.033025614s)
	I1101 08:45:30.862016  535088 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1101 08:45:30.865323  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:30.865769  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:30.865797  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:30.866047  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:31.632521  535088 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1101 08:45:31.806924  535088 addons.go:239] Setting addon gcp-auth=true in "addons-994396"
	I1101 08:45:31.807015  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:31.809359  535088 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1101 08:45:31.813090  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:31.814762  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:31.814801  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:31.814989  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:33.008057  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.456928918s)
	I1101 08:45:33.008164  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.417239871s)
	I1101 08:45:33.008205  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.35364594s)
	I1101 08:45:33.008240  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.316568456s)
	I1101 08:45:33.008302  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (8.233079465s)
	I1101 08:45:33.008386  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.756527935s)
	I1101 08:45:33.008524  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.725858558s)
	I1101 08:45:33.008553  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.333786806s)
	W1101 08:45:33.008563  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:33.008566  535088 addons.go:480] Verifying addon registry=true in "addons-994396"
	I1101 08:45:33.008586  535088 retry.go:31] will retry after 241.480923ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:33.008638  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.957281467s)
	I1101 08:45:33.008733  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.600492861s)
	I1101 08:45:33.008738  535088 addons.go:480] Verifying addon metrics-server=true in "addons-994396"
	I1101 08:45:33.010227  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.470250108s)
	I1101 08:45:33.010253  535088 addons.go:480] Verifying addon ingress=true in "addons-994396"
	I1101 08:45:33.011210  535088 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-994396 service yakd-dashboard -n yakd-dashboard
	
	I1101 08:45:33.011218  535088 out.go:179] * Verifying registry addon...
	I1101 08:45:33.012250  535088 out.go:179] * Verifying ingress addon...
	I1101 08:45:33.014024  535088 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1101 08:45:33.015512  535088 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1101 08:45:33.051723  535088 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 08:45:33.051749  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:33.051812  535088 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1101 08:45:33.051833  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 08:45:33.111540  535088 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1101 08:45:33.250325  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:33.619402  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:33.619673  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:33.847569  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.990948052s)
	I1101 08:45:33.847595  535088 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (6.535150405s)
	I1101 08:45:33.847621  535088 api_server.go:72] duration metric: took 10.424417181s to wait for apiserver process to appear ...
	I1101 08:45:33.847629  535088 api_server.go:88] waiting for apiserver healthz status ...
	W1101 08:45:33.847626  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1101 08:45:33.847652  535088 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I1101 08:45:33.847651  535088 retry.go:31] will retry after 218.125549ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
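The failure above is an ordering problem rather than a broken manifest: the same apply creates both the snapshot.storage.k8s.io CRDs and a VolumeSnapshotClass that depends on them, and the custom resource is rejected because the CRD is not yet established, hence "ensure CRDs are installed first". The forced re-apply a moment later succeeds once the CRDs are registered. A minimal sketch of splitting the apply to avoid the race (file paths and CRD names taken from the log output above; this is not minikube's actual retry logic):

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml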
	I1101 08:45:33.908865  535088 api_server.go:279] https://192.168.39.195:8443/healthz returned 200:
	ok
	I1101 08:45:33.910593  535088 api_server.go:141] control plane version: v1.34.1
	I1101 08:45:33.910629  535088 api_server.go:131] duration metric: took 62.993472ms to wait for apiserver health ...
	I1101 08:45:33.910638  535088 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 08:45:33.979264  535088 system_pods.go:59] 17 kube-system pods found
	I1101 08:45:33.979341  535088 system_pods.go:61] "amd-gpu-device-plugin-vssmp" [a3b8c16e-b583-47df-a5c2-97218d3ec5be] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 08:45:33.979358  535088 system_pods.go:61] "coredns-66bc5c9577-2rqh8" [b131b2b2-f9b9-4197-8bc7-4d1bc185c804] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:45:33.979373  535088 system_pods.go:61] "coredns-66bc5c9577-8b9dw" [7580a21e-bef2-4e34-84b5-b8f67e32b346] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:45:33.979381  535088 system_pods.go:61] "etcd-addons-994396" [9ed2483c-c69f-483c-a489-238983cc8e9e] Running
	I1101 08:45:33.979388  535088 system_pods.go:61] "kube-apiserver-addons-994396" [0d587a06-f48e-4068-bb17-3a28d8a8d340] Running
	I1101 08:45:33.979401  535088 system_pods.go:61] "kube-controller-manager-addons-994396" [e60002dc-411e-458d-b7ea-affbee71d5a0] Running
	I1101 08:45:33.979413  535088 system_pods.go:61] "kube-ingress-dns-minikube" [d947f942-2149-492a-9b4e-1f9c22405815] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:45:33.979421  535088 system_pods.go:61] "kube-proxy-fbmdq" [dc5dd6b4-2f38-4c9d-acd8-92f7984fd96a] Running
	I1101 08:45:33.979431  535088 system_pods.go:61] "kube-scheduler-addons-994396" [bfc13d51-5be5-4462-b4a9-5d4f37f75bc4] Running
	I1101 08:45:33.979438  535088 system_pods.go:61] "metrics-server-85b7d694d7-qpjgn" [ca6b12be-7c02-4334-aa28-6300877d8e89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:45:33.979452  535088 system_pods.go:61] "nvidia-device-plugin-daemonset-bn97p" [8cc13452-31c6-46b5-8efb-e8b44ec63c27] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 08:45:33.979468  535088 system_pods.go:61] "registry-6b586f9694-b4ph6" [f2c8e5be-bee4-4b31-a8dc-ee43d6a6430c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:45:33.979480  535088 system_pods.go:61] "registry-creds-764b6fb674-xstzf" [75cdadc5-e3ea-4aae-9002-6dca21e0f758] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:45:33.979501  535088 system_pods.go:61] "registry-proxy-bzs78" [151e456a-63e0-4527-8511-34c4444fef48] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 08:45:33.979512  535088 system_pods.go:61] "snapshot-controller-7d9fbc56b8-2pbx5" [e9e973a4-20dd-4785-a3d6-1557c012cc76] Pending
	I1101 08:45:33.979522  535088 system_pods.go:61] "snapshot-controller-7d9fbc56b8-jbkmr" [19dc2ae7-668b-4952-9c2d-6602eac4449e] Pending
	I1101 08:45:33.979531  535088 system_pods.go:61] "storage-provisioner" [a0182754-0c9c-458b-a340-20ec025cb56c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 08:45:33.979545  535088 system_pods.go:74] duration metric: took 68.899123ms to wait for pod list to return data ...
	I1101 08:45:33.979563  535088 default_sa.go:34] waiting for default service account to be created ...
	I1101 08:45:34.005592  535088 default_sa.go:45] found service account: "default"
	I1101 08:45:34.005620  535088 default_sa.go:55] duration metric: took 26.049347ms for default service account to be created ...
	I1101 08:45:34.005631  535088 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 08:45:34.029039  535088 system_pods.go:86] 17 kube-system pods found
	I1101 08:45:34.029088  535088 system_pods.go:89] "amd-gpu-device-plugin-vssmp" [a3b8c16e-b583-47df-a5c2-97218d3ec5be] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 08:45:34.029098  535088 system_pods.go:89] "coredns-66bc5c9577-2rqh8" [b131b2b2-f9b9-4197-8bc7-4d1bc185c804] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:45:34.029109  535088 system_pods.go:89] "coredns-66bc5c9577-8b9dw" [7580a21e-bef2-4e34-84b5-b8f67e32b346] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:45:34.029116  535088 system_pods.go:89] "etcd-addons-994396" [9ed2483c-c69f-483c-a489-238983cc8e9e] Running
	I1101 08:45:34.029123  535088 system_pods.go:89] "kube-apiserver-addons-994396" [0d587a06-f48e-4068-bb17-3a28d8a8d340] Running
	I1101 08:45:34.029128  535088 system_pods.go:89] "kube-controller-manager-addons-994396" [e60002dc-411e-458d-b7ea-affbee71d5a0] Running
	I1101 08:45:34.029139  535088 system_pods.go:89] "kube-ingress-dns-minikube" [d947f942-2149-492a-9b4e-1f9c22405815] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:45:34.029144  535088 system_pods.go:89] "kube-proxy-fbmdq" [dc5dd6b4-2f38-4c9d-acd8-92f7984fd96a] Running
	I1101 08:45:34.029150  535088 system_pods.go:89] "kube-scheduler-addons-994396" [bfc13d51-5be5-4462-b4a9-5d4f37f75bc4] Running
	I1101 08:45:34.029156  535088 system_pods.go:89] "metrics-server-85b7d694d7-qpjgn" [ca6b12be-7c02-4334-aa28-6300877d8e89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:45:34.029165  535088 system_pods.go:89] "nvidia-device-plugin-daemonset-bn97p" [8cc13452-31c6-46b5-8efb-e8b44ec63c27] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 08:45:34.029173  535088 system_pods.go:89] "registry-6b586f9694-b4ph6" [f2c8e5be-bee4-4b31-a8dc-ee43d6a6430c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:45:34.029184  535088 system_pods.go:89] "registry-creds-764b6fb674-xstzf" [75cdadc5-e3ea-4aae-9002-6dca21e0f758] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:45:34.029194  535088 system_pods.go:89] "registry-proxy-bzs78" [151e456a-63e0-4527-8511-34c4444fef48] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 08:45:34.029202  535088 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2pbx5" [e9e973a4-20dd-4785-a3d6-1557c012cc76] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:45:34.029211  535088 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jbkmr" [19dc2ae7-668b-4952-9c2d-6602eac4449e] Pending
	I1101 08:45:34.029232  535088 system_pods.go:89] "storage-provisioner" [a0182754-0c9c-458b-a340-20ec025cb56c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 08:45:34.029244  535088 system_pods.go:126] duration metric: took 23.605903ms to wait for k8s-apps to be running ...
	I1101 08:45:34.029259  535088 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 08:45:34.029328  535088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 08:45:34.057589  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:34.060041  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:34.066143  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 08:45:34.536703  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:34.540613  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:35.033279  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:35.057492  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:35.517382  535088 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.707985766s)
	I1101 08:45:35.519009  535088 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 08:45:35.519008  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.230381443s)
	I1101 08:45:35.519151  535088 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-994396"
	I1101 08:45:35.520249  535088 out.go:179] * Verifying csi-hostpath-driver addon...
	I1101 08:45:35.521386  535088 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1101 08:45:35.522322  535088 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1101 08:45:35.523075  535088 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1101 08:45:35.523091  535088 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1101 08:45:35.574185  535088 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 08:45:35.574221  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:35.574179  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:35.589220  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:35.670403  535088 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1101 08:45:35.670443  535088 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1101 08:45:35.926227  535088 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 08:45:35.926260  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1101 08:45:36.028744  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:36.029084  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:36.032411  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:36.103812  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 08:45:36.521069  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:36.523012  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:36.530349  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:37.024569  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:37.026839  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:37.029801  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:37.202891  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.952517264s)
	W1101 08:45:37.202946  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:37.202972  535088 retry.go:31] will retry after 301.106324ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
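Unlike the snapshot CRD race, this ig-crd.yaml failure repeats unchanged on every retry below: "apiVersion not set, kind not set" is client-side validation rejecting a document in the manifest that carries no type metadata at all, so re-applying cannot fix it (the gadget resources from ig-deployment.yaml keep applying cleanly, as the unchanged/configured lines show). A hypothetical diagnostic sketch, using only the path from the log, to check whether the file's documents carry the expected headers; a well-formed CRD document must begin with apiVersion: apiextensions.k8s.io/v1 and kind: CustomResourceDefinition:

	# show the type metadata of each YAML document in the addon manifest
	grep -n -E '^(apiVersion|kind):' /etc/kubernetes/addons/ig-crd.yaml
	head -n 5 /etc/kubernetes/addons/ig-crd.yaml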
	I1101 08:45:37.203012  535088 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.173650122s)
	I1101 08:45:37.203055  535088 system_svc.go:56] duration metric: took 3.173789622s WaitForService to wait for kubelet
	I1101 08:45:37.203071  535088 kubeadm.go:587] duration metric: took 13.779865062s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 08:45:37.203102  535088 node_conditions.go:102] verifying NodePressure condition ...
	I1101 08:45:37.208388  535088 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 08:45:37.208416  535088 node_conditions.go:123] node cpu capacity is 2
	I1101 08:45:37.208429  535088 node_conditions.go:105] duration metric: took 5.320357ms to run NodePressure ...
	I1101 08:45:37.208441  535088 start.go:242] waiting for startup goroutines ...
	I1101 08:45:37.368099  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.301889566s)
	I1101 08:45:37.504488  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:37.521079  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:37.521246  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:37.528201  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:37.991386  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.887518439s)
	I1101 08:45:37.992795  535088 addons.go:480] Verifying addon gcp-auth=true in "addons-994396"
	I1101 08:45:37.995595  535088 out.go:179] * Verifying gcp-auth addon...
	I1101 08:45:37.997651  535088 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1101 08:45:38.013086  535088 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1101 08:45:38.013118  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:38.028095  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:38.030768  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:38.041146  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:38.502928  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:38.520170  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:38.521930  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:38.526766  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:39.004207  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:39.019028  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:39.024223  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:39.031869  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:39.206009  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.701470957s)
	W1101 08:45:39.206061  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:39.206085  535088 retry.go:31] will retry after 556.568559ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:39.503999  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:39.527340  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:39.537658  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:39.537658  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:39.763081  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:40.006287  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:40.021411  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:40.025825  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:40.028609  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:40.507622  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:40.523293  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:40.527164  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:40.530886  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:41.005619  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:41.021779  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:41.023058  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:41.028879  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:41.134842  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.371696885s)
	W1101 08:45:41.134889  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:41.134933  535088 retry.go:31] will retry after 634.404627ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:41.501998  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:41.519483  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:41.522699  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:41.527571  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:41.769910  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:42.004958  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:42.021144  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:42.021931  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:42.027195  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:42.501545  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:42.519865  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:42.522754  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:42.526903  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:42.775680  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.00572246s)
	W1101 08:45:42.775745  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:42.775781  535088 retry.go:31] will retry after 1.084498807s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:43.002944  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:43.020356  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:43.020475  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:43.134004  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:43.504736  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:43.519636  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:43.520489  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:43.525810  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:43.861263  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:44.001829  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:44.019292  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:44.021251  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:44.026202  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:44.503149  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:44.520624  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:44.520651  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:44.526211  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:45:44.623495  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:44.623540  535088 retry.go:31] will retry after 1.856024944s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:45.001600  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:45.020242  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:45.022140  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:45.026024  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:45.507084  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:45.523761  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:45.524237  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:45.529475  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:46.005033  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:46.108846  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:46.109151  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:46.109369  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:46.479732  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:46.503499  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:46.520286  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:46.526234  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:46.529155  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:47.001657  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:47.019094  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:47.023015  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:47.027997  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:47.507760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:47.519999  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:47.524925  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:47.528391  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:47.666049  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.186267383s)
	W1101 08:45:47.666140  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:47.666174  535088 retry.go:31] will retry after 4.139204607s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:48.003042  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:48.019125  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:48.027235  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:48.031596  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:48.722743  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:48.727291  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:48.727372  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:48.727610  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:49.004382  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:49.019147  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:49.021814  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:49.026878  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:49.504442  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:49.517916  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:49.520088  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:49.525828  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:50.001964  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:50.024108  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:50.024120  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:50.029503  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:50.504014  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:50.523676  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:50.527259  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:50.529569  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:51.002796  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:51.022756  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:51.022985  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:51.026836  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:51.501595  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:51.523272  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:51.526829  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:51.530749  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:51.806085  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:52.003559  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:52.019381  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:52.019451  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:52.027431  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:52.504756  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:52.522177  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:52.526818  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:52.531367  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:53.001310  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:53.018845  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:53.024989  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:53.029380  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:53.104383  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.298241592s)
	W1101 08:45:53.104437  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:53.104469  535088 retry.go:31] will retry after 2.354213604s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:53.504133  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:53.521260  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:53.521459  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:53.530531  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:54.465678  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:54.465798  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:54.466036  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:54.466159  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:54.562016  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:54.562014  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:54.562133  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:54.562454  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:55.001120  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:55.025479  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:55.025582  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:55.026324  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:55.460012  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:55.504349  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:55.519300  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:55.521013  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:55.527541  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:56.002846  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:56.025053  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:56.029411  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:56.032019  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:56.575604  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:56.575734  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:56.577952  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:56.577981  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:56.753301  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.293228646s)
	W1101 08:45:56.753349  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:56.753376  535088 retry.go:31] will retry after 4.355574242s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
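The apply failure above is kubectl's client-side validation rejecting ig-crd.yaml because the manifest (or one YAML document inside it) does not set apiVersion and kind, while every object from ig-deployment.yaml applies cleanly ("unchanged"/"configured"). That suggests the problem is in the contents of ig-crd.yaml itself rather than in the apply command. A minimal sketch of the check kubectl is complaining about, assuming gopkg.in/yaml.v3 and reusing the file path from the log (the splitting on "---" is simplified), could look like:

	// checkmanifest.go: sketch of the "apiVersion not set, kind not set" check,
	// assuming gopkg.in/yaml.v3; not kubectl's actual validation code.
	package main

	import (
		"fmt"
		"os"
		"strings"

		"gopkg.in/yaml.v3"
	)

	func main() {
		data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml") // path taken from the log above
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// A manifest file may hold several YAML documents separated by "---";
		// kubectl validates each one and reports the error seen above when a
		// document omits apiVersion or kind. Splitting here is simplified.
		for i, doc := range strings.Split(string(data), "\n---") {
			var obj map[string]interface{}
			if err := yaml.Unmarshal([]byte(doc), &obj); err != nil {
				fmt.Printf("document %d: not valid YAML: %v\n", i, err)
				continue
			}
			if obj == nil {
				continue // empty document
			}
			if obj["apiVersion"] == nil || obj["kind"] == nil {
				fmt.Printf("document %d: apiVersion or kind not set\n", i)
			}
		}
	}
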
	I1101 08:45:57.006174  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:57.021087  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:57.023942  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:57.029154  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:57.505515  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:57.520197  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:57.523156  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:57.525955  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:58.001505  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:58.018201  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:58.022518  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:58.025296  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:58.505701  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:58.524023  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:58.526483  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:58.536508  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:59.001410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:59.017471  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:59.020442  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:59.025457  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:59.501507  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:59.519043  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:59.520094  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:59.525760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:00.001248  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:00.017563  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:00.020984  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:00.026549  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:00.501281  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:00.519844  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:00.521324  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:00.525700  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:01.001953  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:01.020105  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:01.020877  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:01.025885  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:01.110059  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:46:01.502129  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:01.519377  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:01.523178  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:01.526440  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:46:01.845885  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:46:01.845957  535088 retry.go:31] will retry after 7.871379914s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:46:02.001335  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:02.019157  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:02.021487  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:02.026236  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:02.502141  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:02.517119  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:02.519718  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:02.526453  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:03.002138  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:03.017025  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:03.019806  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:03.026770  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:03.502833  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:03.520032  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:03.520118  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:03.526559  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:04.064971  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:04.065055  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:04.068066  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:04.068526  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:04.502308  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:04.520197  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:04.521585  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:04.526046  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:05.003330  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:05.017484  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:05.019495  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:05.026496  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:05.501222  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:05.517839  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:05.520724  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:05.525994  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:06.001368  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:06.019614  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:06.020124  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:06.025568  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:06.500972  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:06.518736  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:06.520211  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:06.526135  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:07.002092  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:07.018836  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:07.020757  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:07.025238  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:07.503063  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:07.517984  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:07.519990  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:07.528565  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:08.002059  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:08.018162  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:08.020563  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:08.026357  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:08.501444  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:08.517337  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:08.519389  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:08.525929  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:09.002578  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:09.018521  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:09.020246  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:09.026866  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:09.501972  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:09.518157  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:09.519720  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:09.527087  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:09.718336  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:46:10.004096  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:10.021038  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:10.021333  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:10.027767  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:46:10.413712  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:46:10.413760  535088 retry.go:31] will retry after 19.114067213s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:46:10.501358  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:10.517730  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:10.520404  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:10.526363  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:11.002849  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:11.019496  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:11.019995  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:11.026025  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:11.501655  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:11.518007  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:11.521219  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:11.525426  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:12.000873  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:12.017867  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:12.020240  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:12.026060  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:12.502263  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:12.518472  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:12.519451  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:12.526084  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:13.002272  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:13.017626  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:13.020404  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:13.025249  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:13.501457  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:13.518992  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:13.520857  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:13.526486  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:14.000572  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:14.019408  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:14.020492  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:14.025038  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:14.501826  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:14.518060  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:14.520198  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:14.526075  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:15.002744  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:15.018115  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:15.019636  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:15.025834  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:15.501625  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:15.518152  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:15.519669  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:15.525079  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:16.001990  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:16.021114  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:16.022918  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:16.025425  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:16.501061  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:16.519200  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:16.519212  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:16.525882  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:17.002326  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:17.017673  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:17.020197  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:17.026945  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:17.502364  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:17.518476  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:17.520804  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:17.526128  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:18.004541  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:18.017957  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:18.020439  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:18.028122  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:18.502479  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:18.519387  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:18.519499  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:18.525828  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:19.003038  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:19.019735  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:19.020844  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:19.027661  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:19.501803  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:19.519280  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:19.519835  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:19.526155  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:20.001793  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:20.018442  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:20.019878  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:20.025324  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:20.501246  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:20.520476  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:20.520774  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:20.525872  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:21.002010  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:21.018221  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:21.019989  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:21.025817  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:21.501814  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:21.518070  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:21.520290  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:21.526096  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:22.002018  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:22.019705  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:22.021053  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:22.026071  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:22.501728  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:22.519405  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:22.520617  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:22.525885  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:23.001744  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:23.019715  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:23.020644  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:23.025597  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:23.502175  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:23.519303  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:23.520222  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:23.526675  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:24.001582  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:24.018997  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:24.020524  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:24.025085  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:24.501770  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:24.519601  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:24.520468  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:24.525222  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:25.002719  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:25.018650  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:25.020825  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:25.026802  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:25.501690  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:25.517716  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:25.520832  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:25.525983  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:26.002212  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:26.017751  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:26.019488  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:26.025775  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:26.501873  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:26.519741  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:26.519825  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:26.526640  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:27.001148  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:27.019101  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:27.019815  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:27.025796  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:27.502066  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:27.518977  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:27.520625  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:27.527501  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:28.000982  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:28.018045  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:28.019539  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:28.026321  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:28.502967  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:28.517882  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:28.520453  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:28.525074  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:29.002093  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:29.019794  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:29.021920  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:29.025114  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:29.502294  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:29.517914  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:29.519213  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:29.526478  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:29.528534  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:46:30.001669  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:30.023801  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:30.027674  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:30.029691  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:46:30.252885  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:46:30.252962  535088 retry.go:31] will retry after 26.857733331s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
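minikube retries the failed apply with a growing delay (4.36s, 7.87s, 19.11s, 26.86s in the log above), which is why the same stdout/stderr block reappears on every attempt. A minimal sketch of that retry-with-increasing-backoff pattern, using only the standard library and a placeholder apply function (illustrative only, not minikube's actual retry.go), is:

	// retrysketch.go: retry with an increasing delay schedule, standard library only.
	// The apply function and the delays are illustrative, not minikube's implementation.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// apply stands in for running `kubectl apply --force -f ...` over SSH.
	func apply() error {
		return errors.New("error validating ig-crd.yaml: apiVersion not set, kind not set")
	}

	func main() {
		// Delays chosen to resemble the 4.4s, 7.9s, 19.1s, 26.9s intervals
		// reported by retry.go in the log above.
		delays := []time.Duration{4 * time.Second, 8 * time.Second, 19 * time.Second, 27 * time.Second}
		for attempt, d := range delays {
			if err := apply(); err == nil {
				fmt.Println("apply succeeded")
				return
			} else {
				fmt.Printf("attempt %d failed: %v; will retry after %s\n", attempt+1, err, d)
			}
			time.Sleep(d)
		}
		fmt.Println("giving up: apply never succeeded")
	}
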
	I1101 08:46:30.501958  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:30.518713  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:30.519451  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:30.526672  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:31.001425  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:31.019226  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:31.020064  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:31.026340  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:31.501882  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:31.518669  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:31.519450  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:31.526794  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:32.001295  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:32.018253  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:32.020474  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:32.026067  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:32.501521  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:32.520301  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:32.522051  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:32.526250  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:33.003215  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:33.018591  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:33.020188  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:33.026759  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:33.501809  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:33.518399  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:33.520442  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:33.526258  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:34.001781  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:34.019409  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:34.019682  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:34.026569  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:34.501910  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:34.518388  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:34.519877  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:34.526549  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:35.002205  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:35.018104  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:35.019931  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:35.026760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:35.501124  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:35.517626  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:35.519260  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:35.526635  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:36.001556  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:36.017651  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:36.020209  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:36.026600  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:36.501047  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:36.519095  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:36.520391  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:36.526515  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:37.001745  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:37.017677  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:37.019854  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:37.026083  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:37.504677  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:37.518518  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:37.519504  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:37.527753  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:38.001657  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:38.018846  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:38.020360  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:38.026665  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:38.501370  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:38.517442  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:38.519287  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:38.525990  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:39.001713  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:39.017774  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:39.019461  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:39.026372  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:39.500859  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:39.519797  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:39.520622  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:39.525917  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:40.001647  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:40.017652  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:40.019113  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:40.025818  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:40.501928  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:40.518504  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:40.520340  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:40.526037  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:41.002231  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:41.017533  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:41.019687  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:41.025641  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:41.501410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:41.518018  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:41.519326  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:41.527062  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:42.001935  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:42.018556  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:42.020009  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:42.025868  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:42.501909  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:42.519346  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:42.521539  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:42.525544  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:43.003422  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:43.018807  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:43.020340  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:43.026621  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:43.501787  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:43.517772  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:43.520385  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:43.526006  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:44.001729  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:44.018572  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:44.020505  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:44.027512  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:44.500861  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:44.517878  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:44.519941  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:44.525966  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:45.002733  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:45.022017  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:45.023425  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:45.027913  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:45.501505  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:45.518036  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:45.518304  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:45.526497  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:46.000839  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:46.018027  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:46.020574  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:46.025140  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:46.502126  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:46.517267  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:46.519576  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:46.525318  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:47.002664  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:47.019029  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:47.020440  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:47.026307  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:47.502751  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:47.518532  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:47.519877  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:47.525668  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:48.001531  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:48.017987  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:48.018860  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:48.025975  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:48.501993  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:48.519439  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:48.520680  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:48.525869  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:49.003110  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:49.020088  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:49.020281  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:49.026209  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:49.501972  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:49.518761  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:49.520450  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:49.526669  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:50.001945  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:50.019111  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:50.020657  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:50.025651  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:50.501137  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:50.519077  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:50.519422  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:50.526050  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:51.002264  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:51.017514  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:51.020444  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:51.026653  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:51.501218  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:51.517606  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:51.519711  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:51.525538  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:52.001505  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:52.017697  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:52.019403  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:52.027381  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:52.501030  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:52.519679  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:52.520880  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:52.525311  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:53.002074  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:53.017920  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:53.020689  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:53.025485  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:53.501565  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:53.518005  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:53.518985  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:53.525510  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:54.001882  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:54.018972  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:54.019868  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:54.025509  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:54.501041  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:54.519696  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:54.520156  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:54.526253  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:55.003167  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:55.017108  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:55.020966  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:55.025536  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:55.501588  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:55.519412  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:55.520387  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:55.526801  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:56.001703  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:56.018098  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:56.019805  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:56.025874  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:56.501547  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:56.518508  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:56.519409  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:56.527341  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:57.001269  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:57.017737  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:57.019765  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:57.026345  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:57.111554  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:46:57.502821  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:57.521781  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:57.523859  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:57.526058  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:46:57.837380  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 08:46:57.837579  535088 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
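	[editor's note] The validation failure above is kubectl's client-side check that every document in an applied manifest declares top-level apiVersion and kind fields; the "apply failed, will retry" line shows minikube simply re-applies the addon manifests rather than passing --validate=false. As a rough, hypothetical illustration only (this is not minikube's addons code; the file name is taken from the log and the check is deliberately simplified), the Go sketch below performs the same top-level field check on a manifest before it would be applied:

	// checkmanifest.go - hypothetical sketch, not minikube's code: mimics the
	// basic client-side check kubectl reported above (apiVersion/kind not set).
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// hasRequiredFields scans one YAML document (as plain text) for top-level
	// "apiVersion:" and "kind:" keys. This is a simplification of kubectl's
	// validation, used here only to illustrate why ig-crd.yaml was rejected.
	func hasRequiredFields(doc string) bool {
		var haveAPIVersion, haveKind bool
		scanner := bufio.NewScanner(strings.NewReader(doc))
		for scanner.Scan() {
			line := scanner.Text()
			// Only top-level keys count, so skip indented lines and comments.
			if strings.HasPrefix(line, " ") || strings.HasPrefix(line, "\t") || strings.HasPrefix(line, "#") {
				continue
			}
			if strings.HasPrefix(line, "apiVersion:") {
				haveAPIVersion = true
			}
			if strings.HasPrefix(line, "kind:") {
				haveKind = true
			}
		}
		return haveAPIVersion && haveKind
	}

	func main() {
		// Hypothetical local path; in the log above the failing file was
		// /etc/kubernetes/addons/ig-crd.yaml on the minikube node.
		data, err := os.ReadFile("ig-crd.yaml")
		if err != nil {
			fmt.Fprintln(os.Stderr, "read manifest:", err)
			os.Exit(1)
		}
		// A manifest file may hold several YAML documents separated by "---".
		for i, doc := range strings.Split(string(data), "\n---") {
			if strings.TrimSpace(doc) == "" {
				continue
			}
			if !hasRequiredFields(doc) {
				fmt.Printf("document %d: apiVersion or kind not set\n", i)
			}
		}
	}

	Run against a manifest whose first document omits those two keys, the sketch prints the same kind of complaint kubectl raised, which is consistent with the ig-crd.yaml contents being empty or malformed at apply time.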
	I1101 08:46:58.002477  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:58.017866  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:58.019513  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:58.025873  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:58.501877  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:58.518871  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:58.519700  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:58.525438  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:59.004488  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:59.026436  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:59.031423  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:59.033704  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:59.508129  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:59.521490  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:59.521737  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:59.526781  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:00.003739  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:00.022791  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:00.022910  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:00.026491  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:00.501517  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:00.517703  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:00.518550  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:00.528527  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:01.010322  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:01.026679  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:01.030087  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:01.030397  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:01.502386  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:01.517530  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:01.522260  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:01.532240  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:02.002156  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:02.022137  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:02.023086  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:02.026049  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:02.504322  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:02.519252  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:02.523461  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:02.528764  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:03.004016  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:03.019471  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:03.021442  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:03.026419  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:03.504419  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:03.519469  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:03.520406  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:03.525550  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:04.002462  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:04.020193  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:04.021462  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:04.026107  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:04.501642  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:04.517490  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:04.519930  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:04.526445  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:05.005197  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:05.018536  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:05.023123  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:05.029475  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:05.502664  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:05.518118  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:05.520518  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:05.526091  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:06.002738  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:06.019575  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:06.022744  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:06.026515  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:06.502554  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:06.519943  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:06.521590  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:06.526208  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:07.004023  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:07.019789  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:07.020273  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:07.026416  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:07.504157  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:07.518612  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:07.520773  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:07.527827  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:08.007295  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:08.020757  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:08.024258  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:08.031878  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:08.505225  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:08.518839  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:08.521622  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:08.525366  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:09.003369  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:09.024660  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:09.024787  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:09.029399  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:09.502978  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:09.520074  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:09.520999  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:09.527832  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:10.002118  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:10.019490  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:10.019688  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:10.026021  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:10.502365  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:10.517980  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:10.519426  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:10.526456  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:11.000763  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:11.017778  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:11.019554  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:11.025361  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:11.502621  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:11.519369  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:11.520248  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:11.525881  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:12.001298  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:12.019652  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:12.020408  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:12.026077  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:12.506179  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:12.518698  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:12.520608  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:12.525646  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:13.004165  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:13.018567  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:13.021172  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:13.026558  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:13.502399  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:13.517614  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:13.520163  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:13.526224  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:14.002692  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:14.018788  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:14.020233  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:14.026247  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:14.502451  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:14.519291  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:14.520395  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:14.528734  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:15.001583  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:15.017574  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:15.019594  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:15.027073  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:15.502087  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:15.518165  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:15.518856  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:15.526691  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:16.002848  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:16.019225  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:16.020564  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:16.025778  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:16.501756  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:16.518991  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:16.520609  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:16.525245  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:17.001845  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:17.019346  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:17.019684  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:17.026396  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:17.502188  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:17.517746  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:17.520856  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:17.525856  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:18.001858  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:18.018536  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:18.021348  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:18.026925  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:18.502390  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:18.517522  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:18.520124  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:18.525853  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:19.001850  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:19.019071  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:19.020953  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:19.025941  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:19.502259  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:19.517542  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:19.520882  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:19.526825  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:20.001558  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:20.018927  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:20.020008  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:20.025511  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:20.501320  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:20.517732  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:20.519487  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:20.526814  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:21.001370  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:21.018101  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:21.019530  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:21.025941  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:21.501703  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:21.517836  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:21.519684  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:21.526074  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:22.001809  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:22.017626  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:22.019534  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:22.025673  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:22.501888  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:22.520695  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:22.521501  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:22.527625  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:23.001636  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:23.017676  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:23.019410  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:23.026546  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:23.502193  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:23.517565  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:23.519741  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:23.525318  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:24.001469  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:24.018681  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:24.021251  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:24.026297  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:24.500658  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:24.517656  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:24.520275  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:24.526953  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:25.002390  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:25.018753  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:25.021470  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:25.026724  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:25.503080  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:25.519469  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:25.522083  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:25.525703  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:26.001480  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:26.018730  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:26.019775  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:26.025922  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:26.501850  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:26.518460  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:26.520597  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:26.526270  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:27.002686  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:27.017503  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:27.019988  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:27.026061  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:27.501773  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:27.519208  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:27.519306  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:27.526944  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:28.001885  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:28.018098  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:28.020961  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:28.026254  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:28.500970  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:28.519603  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:28.521180  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:28.526295  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:29.003607  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:29.018630  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:29.021082  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:29.026312  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:29.501919  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:29.517754  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:29.519736  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:29.525891  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:30.002036  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:30.018828  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:30.020404  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:30.026209  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:30.502329  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:30.517607  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:30.520177  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:30.527152  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:31.003066  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:31.020280  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:31.020496  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:31.026046  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:31.503011  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:31.519101  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:31.520154  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:31.525819  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:32.001349  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:32.017760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:32.020383  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:32.026548  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:32.501020  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:32.519372  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:32.520621  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:32.525197  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:33.001939  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:33.017981  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:33.018721  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:33.025389  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:33.502684  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:33.519286  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:33.519798  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:33.526360  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:34.001915  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:34.018089  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:34.018866  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:34.025884  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:34.502109  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:34.518315  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:34.520992  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:34.525955  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:35.001980  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:35.020058  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:35.020195  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:35.026107  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:35.502513  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:35.519131  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:35.519364  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:35.526431  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:36.001532  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:36.017633  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:36.019879  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:36.025714  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:36.501267  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:36.517441  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:36.519775  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:36.526367  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:37.002311  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:37.017625  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:37.020233  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:37.025830  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:37.502486  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:37.518494  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:37.519337  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:37.526256  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:38.002200  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:38.017679  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:38.020437  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:38.025635  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:38.502121  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:38.518742  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:38.519609  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:38.525528  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:39.001668  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:39.017868  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:39.019195  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:39.027138  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:39.502726  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:39.518837  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:39.519527  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:39.525448  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:40.037966  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:40.038824  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:40.039617  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:40.039888  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:40.510995  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:40.611235  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:40.611494  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:40.612020  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:41.007852  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:41.104319  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:41.105167  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:41.106241  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:41.503207  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:41.519701  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:41.523717  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:41.528111  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:42.002832  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:42.019368  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:42.026027  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:42.028968  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:42.504592  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:42.518781  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:42.522913  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:42.527017  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:43.002059  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:43.021540  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:43.022732  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:43.027733  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:43.501969  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:43.523064  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:43.523122  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:43.526723  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:44.016033  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:44.048228  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:44.048288  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:44.049707  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:44.510334  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:44.517005  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:44.520734  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:44.527760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:45.002493  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:45.025067  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:45.025090  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:45.030831  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:45.503106  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:45.519233  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:45.522740  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:45.526357  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:46.003368  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:46.021702  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:46.023084  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:46.025372  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:46.507201  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:46.528398  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:46.528540  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:46.528597  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:47.005313  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:47.021521  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:47.023522  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:47.030205  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:47.508306  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:47.517975  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:47.523254  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:47.528801  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:48.004599  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:48.018025  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:48.024054  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:48.030295  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:48.504150  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:48.518048  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:48.519937  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:48.527633  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:49.003426  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:49.021317  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:49.104457  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:49.105285  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:49.502613  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:49.520941  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:49.521038  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:49.525762  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:50.002168  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:50.018353  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:50.019606  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:50.025332  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:50.501342  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:50.518265  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:50.520375  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:50.526058  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:51.001482  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:51.018509  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:51.018674  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:51.026149  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:51.502439  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:51.518320  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:51.519717  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:51.525114  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:52.001594  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:52.017697  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:52.019121  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:52.026265  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:52.501713  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:52.517565  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:52.519496  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:52.525722  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:53.001345  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:53.018104  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:53.020275  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:53.025637  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:53.503025  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:53.518670  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:53.520663  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:53.525659  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:54.001263  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:54.018846  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:54.019116  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:54.025335  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:54.502071  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:54.519000  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:54.519010  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:54.525456  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:55.001977  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:55.017957  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:55.021189  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:55.026699  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:55.502333  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:55.517379  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:55.519350  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:55.526773  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:56.001599  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:56.018008  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:56.020215  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:56.025828  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:56.501455  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:56.517521  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:56.519235  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:56.527201  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:57.001827  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:57.020037  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:57.020749  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:57.025827  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:57.503759  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:57.517849  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:57.520371  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:57.526800  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:58.002360  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:58.017843  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:58.020412  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:58.026527  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:58.501394  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:58.517523  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:58.520352  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:58.525725  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:59.002102  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:59.017074  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:59.020520  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:59.026683  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:59.502383  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:59.517821  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:59.520938  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:59.525444  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:00.004519  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:00.104585  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:00.104625  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:00.104775  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:00.501109  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:00.518462  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:00.519031  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:00.525932  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:01.001882  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:01.018255  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:01.019640  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:01.025291  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:01.503231  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:01.518634  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:01.520274  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:01.526356  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:02.002389  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:02.018529  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:02.019411  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:02.026657  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:02.501043  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:02.518076  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:02.519080  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:02.526504  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:03.001361  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:03.019762  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:03.022333  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:03.025239  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:03.501714  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:03.519163  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:03.521149  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:03.526410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:04.000747  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:04.019676  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:04.020330  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:04.026159  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:04.502467  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:04.518491  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:04.518845  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:04.525769  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:05.001664  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:05.019454  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:05.019620  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:05.027022  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:05.502850  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:05.518666  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:05.520316  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:05.526009  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:06.002470  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:06.017750  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:06.019816  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:06.025697  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:06.501760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:06.519481  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:06.519738  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:06.525711  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:07.001752  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:07.017749  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:07.019804  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:07.025660  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:07.501792  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:07.517577  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:07.519794  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:07.525244  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:08.002742  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:08.018517  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:08.020369  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:08.026630  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:08.501587  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:08.518305  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:08.519219  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:08.526380  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:09.000977  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:09.018805  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:09.019761  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:09.025690  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:09.501890  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:09.517987  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:09.520782  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:09.525601  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:10.001949  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:10.018921  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:10.020592  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:10.026413  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:10.501660  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:10.518677  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:10.518948  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:10.525564  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:11.001486  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:11.017692  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:11.019759  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:11.025724  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:11.503245  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:11.519474  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:11.520078  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:11.525649  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:12.002655  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:12.017994  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:12.020743  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:12.025544  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:12.500866  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:12.519004  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:12.520797  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:12.527102  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:13.001891  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:13.019380  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:13.020948  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:13.025584  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:13.502039  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:13.519170  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:13.520827  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:13.525891  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:14.002597  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:14.018456  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:14.019344  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:14.025889  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:14.501808  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:14.518199  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:14.520114  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:14.526515  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:15.000809  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:15.017935  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:15.019860  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:15.026010  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:15.502293  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:15.517549  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:15.520189  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:15.603271  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:16.001815  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:16.018392  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:16.020440  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:16.025577  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:16.501456  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:16.517675  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:16.519938  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:16.525413  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:17.000943  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:17.017838  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:17.021846  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:17.026719  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:17.502498  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:17.517532  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:17.518370  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:17.526307  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:18.002824  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:18.019355  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:18.019386  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:18.027193  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:18.501577  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:18.518262  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:18.520767  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:18.525078  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:19.002037  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:19.020156  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:19.021197  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:19.025423  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:19.501921  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:19.519607  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:19.520544  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:19.524793  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:20.001960  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:20.018434  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:20.020315  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:20.026179  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:20.503025  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:20.518911  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:20.520556  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:20.525269  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:21.002029  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:21.024168  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:21.026997  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:21.031803  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:21.502358  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:21.517786  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:21.518786  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:21.525830  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:22.001594  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:22.017338  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:22.018324  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:22.025889  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:22.503054  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:22.520388  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:22.521916  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:22.526202  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:23.002517  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:23.020216  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:23.021156  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:23.028984  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:23.500976  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:23.519154  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:23.519316  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:23.526809  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:24.002882  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:24.019205  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:24.020141  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:24.026965  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:24.501036  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:24.518337  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:24.519991  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:24.525486  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:25.001657  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:25.018947  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:25.019127  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:25.025725  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:25.501581  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:25.518560  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:25.520017  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:25.525518  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:26.001825  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:26.018331  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:26.020369  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:26.026403  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:26.501127  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:26.519632  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:26.520978  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:26.525884  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:27.002361  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:27.018164  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:27.020412  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:27.027021  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:27.502390  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:27.517925  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:27.520125  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:27.525535  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:28.002688  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:28.017322  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:28.019838  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:28.025328  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:28.501474  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:28.517324  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:28.519128  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:28.525804  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:29.001640  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:29.017615  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:29.019699  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:29.025407  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:29.501333  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:29.518228  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:29.520320  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:29.526401  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:30.001257  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:30.017769  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:30.019813  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:30.025681  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:30.501852  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:30.517912  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:30.519457  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:30.525502  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:31.001036  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:31.018891  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:31.019341  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:31.026847  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:31.501891  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:31.517945  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:31.519845  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:31.525477  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:32.002494  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:32.018364  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:32.019047  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:32.025949  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:32.501632  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:32.517753  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:32.519551  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:32.525075  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:33.002010  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:33.019109  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:33.021003  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:33.025940  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:33.503032  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:33.518866  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:33.520801  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:33.525566  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:34.002115  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:34.017835  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:34.020583  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:34.026191  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:34.502465  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:34.517620  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:34.520272  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:34.526608  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:35.000870  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:35.018932  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:35.019718  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:35.025748  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:35.502491  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:35.517523  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:35.519496  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:35.525784  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:36.001520  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:36.019495  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:36.020061  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:36.026348  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:36.501803  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:36.519550  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:36.519863  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:36.526033  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:37.001475  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:37.018365  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:37.019331  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:37.026308  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:37.502572  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:37.517421  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:37.520211  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:37.525925  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:38.001941  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:38.019309  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:38.020493  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:38.027497  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:38.501822  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:38.517786  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:38.520262  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:38.526454  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:39.003835  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:39.019771  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:39.020317  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:39.025953  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:39.501469  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:39.517769  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:39.519531  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:39.526394  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:40.001467  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:40.018767  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:40.018975  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:40.025574  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:40.501327  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:40.517147  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:40.519793  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:40.525870  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:41.001711  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:41.019756  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:41.022733  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:41.025432  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:41.501110  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:41.517577  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:41.520152  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:41.526331  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:42.001665  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:42.018212  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:42.020818  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:42.027301  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:42.502145  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:42.518137  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:42.520139  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:42.525932  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:43.002613  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:43.018231  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:43.019849  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:43.026083  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:43.501054  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:43.518385  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:43.519196  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:43.526209  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:44.002494  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:44.017824  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:44.020797  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:44.026068  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:44.501618  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:44.519136  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:44.519498  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:44.526198  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:45.001727  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:45.019695  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:45.020007  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:45.026210  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:45.502382  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:45.518209  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:45.520090  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:45.526008  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:46.002275  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:46.017575  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:46.020217  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:46.026182  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:46.501858  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:46.518887  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:46.520199  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:46.525849  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:47.001391  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:47.017528  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:47.019856  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:47.026978  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:47.502108  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:47.517185  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:47.519497  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:47.526193  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:48.002439  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:48.018567  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:48.019868  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:48.026369  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:48.502252  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:48.518245  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:48.519830  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:48.525789  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:49.002157  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:49.017975  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:49.020029  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:49.026100  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:49.504825  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:49.517735  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:49.522486  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:49.528548  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:50.005615  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:50.019305  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:50.021640  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:50.027410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:50.501443  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:50.519328  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:50.519829  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:50.526094  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:51.001398  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:51.019374  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:51.020621  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:51.024951  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:51.501419  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:51.517860  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:51.519006  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:51.525945  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:52.002467  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:52.017274  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:52.019058  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:52.025509  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:52.501980  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:52.517824  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:52.519466  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:52.524793  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:53.001604  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:53.018807  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:53.019698  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:53.025324  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:53.501302  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:53.517854  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:53.519844  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:53.526844  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:54.001945  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:54.017746  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:54.020114  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:54.025868  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:54.501860  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:54.519009  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:54.520308  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:54.525824  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:55.001176  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:55.017056  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:55.019336  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:55.026011  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:55.502015  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:55.518868  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:55.519785  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:55.525794  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:56.002253  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:56.017282  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:56.020639  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:56.026305  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:56.501860  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:56.518058  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:56.519766  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:56.525982  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:57.001770  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:57.018418  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:57.021050  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:57.026140  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:57.502619  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:57.517497  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:57.519971  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:57.526180  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:58.002367  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:58.018215  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:58.020881  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:58.025867  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:58.502163  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:58.518906  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:58.519560  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:58.525238  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:59.002160  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:59.018131  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:59.019720  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:59.026035  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:59.501498  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:59.517861  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:59.520038  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:59.525911  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:00.008043  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:00.108599  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:00.108605  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:00.108940  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:00.501986  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:00.519116  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:00.519363  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:00.526237  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:01.002941  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:01.018164  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:01.019968  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:01.026086  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:01.501165  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:01.518371  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:01.519716  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:01.526191  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:02.003221  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:02.017756  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:02.020569  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:02.025532  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:02.502303  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:02.517833  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:02.520043  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:02.526299  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:03.001963  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:03.019603  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:03.020175  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:03.026074  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:03.501418  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:03.518548  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:03.519326  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:03.526362  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:04.001337  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:04.017680  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:04.020642  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:04.025160  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:04.501481  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:04.519187  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:04.519354  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:04.526002  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:05.001164  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:05.017266  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:05.020018  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:05.025815  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:05.501835  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:05.518458  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:05.519449  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:05.526988  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:06.001942  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:06.017559  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:06.019230  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:06.027617  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:06.501568  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:06.518953  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:06.519722  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:06.525410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:07.000827  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:07.017696  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:07.019798  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:07.025714  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:07.501984  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:07.519229  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:07.520125  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:07.525931  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:08.002067  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:08.018520  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:08.020314  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:08.026702  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:08.501478  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:08.518992  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:08.519109  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:08.525577  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:09.001061  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:09.019049  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:09.019914  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:09.025870  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:09.501375  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:09.517502  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:09.520013  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:09.525860  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:10.002219  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:10.018451  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:10.019784  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:10.025779  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:10.503078  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:10.519196  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:10.519485  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:10.528833  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:11.001789  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:11.017702  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:11.019708  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:11.025298  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:11.501809  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:11.517966  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:11.520785  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:11.526958  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:12.002467  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:12.017726  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:12.019345  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:12.026841  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:12.501551  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:12.518027  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:12.520217  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:12.526558  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:13.001536  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:13.018736  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:13.020611  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:13.025440  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:13.501358  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:13.517837  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:13.519745  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:13.526510  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:14.002283  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:14.017864  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:14.019800  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:14.025916  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:14.502006  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:14.519062  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:14.519655  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:14.525994  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:15.005447  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:15.017234  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:15.019831  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:15.026557  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:15.501996  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:15.519856  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:15.520083  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:15.525230  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:16.002748  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:16.019355  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:16.019533  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:16.025957  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:16.502580  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:16.517837  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:16.519968  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:16.525850  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:17.001935  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:17.019152  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:17.019529  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:17.025144  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:17.503036  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:17.518401  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:17.520738  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:17.525739  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:18.001970  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:18.018590  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:18.019682  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:18.026543  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:18.505234  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:18.517615  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:18.520770  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:18.525690  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:19.001486  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:19.018177  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:19.019004  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:19.025710  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:19.502094  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:19.519521  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:19.520380  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:19.526127  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:20.002068  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:20.020224  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:20.021127  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:20.025520  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:20.501694  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:20.518963  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:20.520765  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:20.525058  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:21.007417  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:21.019690  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:21.024784  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:21.025732  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:21.504133  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:21.520851  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:21.521975  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:21.528716  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:22.002656  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:22.019037  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:22.020474  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:22.026247  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:22.501702  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:22.517925  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:22.521095  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:22.526859  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:23.002583  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:23.019101  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:23.020457  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:23.025456  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:23.502095  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:23.518464  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:23.522059  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:23.526260  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:24.003337  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:24.017841  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:24.021116  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:24.025850  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:24.501756  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:24.518762  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:24.520412  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:24.527410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:25.001848  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:25.018927  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:25.019525  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:25.025681  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:25.501555  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:25.518984  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:25.519924  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:25.526028  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:26.002318  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:26.018839  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:26.021112  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:26.025766  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:26.501254  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:26.518654  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:26.520701  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:26.525608  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:27.001830  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:27.017870  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:27.020014  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:27.026744  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:27.501677  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:27.519613  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:27.519874  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:27.526220  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:28.002947  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:28.019118  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:28.020560  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:28.025161  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:28.501842  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:28.518344  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:28.519678  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:28.525197  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:29.003014  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:29.018826  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:29.020409  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:29.026088  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:29.501916  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:29.518127  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:29.520850  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:29.525382  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:30.001229  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:30.017453  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:30.019095  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:30.026360  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:30.502510  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:30.517380  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:30.518702  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:30.525410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:31.001216  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:31.018086  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:31.020349  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:31.026668  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:31.502075  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:31.518995  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:31.519726  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:31.526262  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:32.011176  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:32.018083  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:32.022218  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:32.026390  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:32.501928  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:32.518961  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:32.519981  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:32.525961  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:33.002956  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:33.018416  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:33.020053  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:33.026871  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:33.503382  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:33.518628  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:33.520030  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:33.526081  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:34.004511  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:34.017733  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:34.019809  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:34.026157  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:34.502455  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:34.517764  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:34.519007  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:34.525748  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:35.002201  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:35.018354  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:35.020561  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:35.024986  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:35.501676  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:35.518080  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:35.520259  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:35.526231  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:36.002290  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:36.017246  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:36.019747  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:36.025424  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:36.502256  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:36.519181  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:36.519361  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:36.526313  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:37.001733  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:37.017924  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:37.019432  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:37.024916  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:37.501788  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:37.518994  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:37.520329  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:37.526158  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:38.002306  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:38.017816  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:38.020329  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:38.026122  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:38.502214  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:38.517689  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:38.519368  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:38.526566  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:39.001344  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:39.018348  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:39.021395  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:39.026118  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:39.502411  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:39.519218  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:39.519487  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:39.526004  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:40.002233  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:40.017415  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:40.020521  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:40.026057  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:40.502613  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:40.518860  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:40.520188  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:40.526090  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:41.002091  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:41.018506  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:41.019711  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:41.025910  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:41.502421  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:41.518400  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:41.521296  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:41.527921  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:42.003104  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:42.018378  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:42.020878  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:42.026161  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:42.502129  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:42.518686  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:42.520170  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:42.525923  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:43.004390  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:43.019175  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:43.022158  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:43.026467  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:43.504086  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:43.520367  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:43.520550  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:43.525380  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:44.002978  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:44.103477  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:44.103494  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:44.104185  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:44.502233  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:44.519809  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:44.519835  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:44.526423  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:45.000496  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:45.018444  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:45.019039  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:45.026510  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:45.502226  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:45.517482  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:45.520689  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:45.525876  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:46.001596  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:46.019690  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:46.021682  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:46.025805  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:46.501418  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:46.517889  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:46.520740  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:46.526273  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:47.001808  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:47.018410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:47.020658  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:47.025282  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:47.502482  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:47.517540  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:47.520502  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:47.525363  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:48.002384  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:48.018017  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:48.020110  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:48.026034  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:48.505672  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:48.520527  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:48.523748  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:48.529163  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:49.002861  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:49.017744  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:49.019716  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:49.025934  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:49.503141  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:49.517174  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:49.519166  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:49.526456  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:50.001342  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:50.017719  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:50.020032  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:50.026547  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:50.501789  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:50.519072  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:50.519782  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:50.525316  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:51.002325  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:51.017470  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:51.021020  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:51.026334  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:51.504006  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:51.518610  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:51.520767  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:51.525227  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:52.003295  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:52.018224  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:52.023940  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:52.028747  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:52.507809  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:52.522785  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:52.523541  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:52.527593  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:53.006856  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:53.021835  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:53.023449  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:53.029978  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:53.506277  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:53.523013  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:53.524326  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:53.531084  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:54.006985  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:54.018665  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:54.023247  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:54.026006  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:54.503056  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:54.519576  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:54.522065  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:54.526728  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:55.003139  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:55.020881  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:55.022886  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:55.028847  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:55.502733  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:55.521726  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:55.530711  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:55.532556  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:56.002638  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:56.021902  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:56.026061  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:56.027811  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:56.501943  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:56.518059  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:56.520358  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:56.527803  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:57.001212  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:57.022110  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:57.023066  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:57.027074  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:57.511753  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:57.522407  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:57.525249  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:57.528427  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:58.003779  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:58.019398  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:58.020765  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:58.025087  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:58.502271  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:58.519021  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:58.520012  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:58.526423  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:59.001770  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:59.028122  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:59.028948  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:59.029097  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:59.503552  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:59.519454  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:59.526099  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:59.528549  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:00.002150  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:00.018589  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:00.020579  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:50:00.026070  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:00.503019  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:00.518818  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:00.521298  535088 kapi.go:107] duration metric: took 4m27.50578325s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1101 08:50:00.526236  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:01.004597  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:01.017417  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:01.026007  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:01.503117  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:01.517929  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:01.526118  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:02.002140  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:02.017309  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:02.026874  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:02.502193  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:02.517206  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:02.526479  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:03.002066  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:03.018800  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:03.026667  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:03.501870  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:03.518027  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:03.526907  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:04.001943  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:04.018110  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:04.026258  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:04.503167  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:04.518066  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:04.526754  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:05.007821  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:05.017748  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:05.025450  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:05.501643  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:05.518495  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:05.525885  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:06.001380  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:06.017918  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:06.026946  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:06.502671  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:06.518784  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:06.526820  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:07.001754  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:07.019448  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:07.025975  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:07.502164  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:07.517678  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:07.526283  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:08.002858  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:08.019273  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:08.027420  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:08.501670  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:08.518047  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:08.526214  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:09.001840  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:09.018206  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:09.027687  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:09.501188  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:09.517532  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:09.526417  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:10.001069  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:10.018157  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:10.026212  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:10.502289  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:10.518055  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:10.526968  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:11.001635  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:11.017991  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:11.025970  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:11.506621  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:11.517412  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:11.526728  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:12.001701  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:12.018119  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:12.025969  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:12.502625  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:12.517475  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:12.526044  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:13.002186  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:13.018439  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:13.026091  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:13.500970  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:13.519505  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:13.525838  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:14.001977  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:14.018285  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:14.027576  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:14.501280  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:14.517529  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:14.526733  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:15.002377  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:15.018228  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:15.026340  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:15.502885  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:15.517651  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:15.527123  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:16.001756  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:16.018508  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:16.026298  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:16.503500  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:16.517929  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:16.526229  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:17.005499  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:17.105592  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:17.105644  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:17.501723  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:17.518760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:17.525930  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:18.009252  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:18.020798  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:18.026084  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:18.502008  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:18.518188  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:18.526054  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:19.001524  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:19.017526  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:19.026186  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:19.501501  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:19.517658  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:19.526525  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:20.001537  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:20.017379  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:20.027037  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:20.501883  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:20.518635  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:20.525619  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:21.001489  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:21.018302  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:21.026672  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:21.501586  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:21.517885  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:21.526477  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:22.000991  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:22.019224  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:22.027309  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:22.502253  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:22.518048  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:22.526007  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:23.002357  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:23.017858  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:23.027027  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:23.500869  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:23.517747  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:23.526047  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:24.002561  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:24.018227  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:24.027043  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:24.502430  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:24.518125  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:24.526108  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:25.002567  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:25.017833  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:25.025933  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:25.502126  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:25.517859  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:25.526354  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:26.000814  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:26.017887  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:26.026568  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:26.502946  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:26.518678  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:26.526480  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:27.001266  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:27.017216  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:27.026609  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:27.501961  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:27.519120  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:27.526911  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:28.002183  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:28.017072  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:28.026509  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:28.503467  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:28.517754  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:28.525800  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:29.001730  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:29.018081  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:29.026318  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:29.503000  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:29.518477  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:29.525663  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:30.001609  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:30.018380  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:30.027170  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:30.502338  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:30.518067  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:30.526337  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:31.001716  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:31.019042  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:31.026553  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:31.502516  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:31.517742  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:31.526076  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:32.003220  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:32.017115  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:32.026003  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:32.503084  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:32.520638  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:32.525815  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:33.002310  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:33.017855  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:33.026358  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:33.501484  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:33.518215  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:33.527345  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:34.001194  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:34.018531  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:34.026371  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:34.501860  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:34.518822  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:34.526665  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:35.000987  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:35.018881  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:35.026261  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:35.503065  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:35.519434  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:35.526091  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:36.002048  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:36.019887  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:36.026789  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:36.502205  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:36.518344  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:36.527132  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:37.001713  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:37.018302  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:37.027636  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:37.502137  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:37.518679  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:37.526770  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:38.002674  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:38.018502  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:38.025131  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:38.502841  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:38.518479  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:38.525394  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:39.003210  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:39.017479  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:39.026633  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:39.501409  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:39.517624  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:39.525765  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:40.001504  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:40.017795  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:40.026635  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:40.504580  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:40.518573  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:40.526384  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:41.000864  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:41.018489  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:41.025191  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:41.501782  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:41.518173  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:41.526463  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:42.000518  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:42.017873  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:42.027131  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:42.502017  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:42.518539  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:42.526000  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:43.002999  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:43.018398  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:43.027329  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:43.501816  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:43.518023  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:43.526878  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:44.002714  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:44.018483  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:44.026808  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:44.502514  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:44.517486  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:44.525494  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:45.000916  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:45.017682  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:45.026270  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:45.504311  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:45.517633  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:45.529587  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:46.005819  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:46.019419  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:46.028247  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:46.501836  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:46.603570  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:46.604017  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:47.002957  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:47.020722  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:47.103677  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:47.504417  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:47.529109  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:47.535255  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:48.027116  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:48.027384  535088 kapi.go:107] duration metric: took 5m10.029733807s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1101 08:50:48.029168  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:48.029460  535088 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-994396 cluster.
	I1101 08:50:48.030850  535088 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1101 08:50:48.032437  535088 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
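
The three gcp-auth messages above amount to a small how-to: opt a pod out with the `gcp-auth-skip-secret` label, or re-run the addon with --refresh so pods that already existed get the credentials mounted. A minimal sketch of both, assuming the addons-994396 profile from this run and a hypothetical pod name (no-gcp-auth):

    # Create a pod whose spec carries the opt-out label, so the gcp-auth webhook skips it
    kubectl --context addons-994396 run no-gcp-auth --image=busybox \
      --labels="gcp-auth-skip-secret=true" -- sleep 3600

    # Re-run the addon to mount credentials into pods created before it was enabled
    out/minikube-linux-amd64 -p addons-994396 addons enable gcp-auth --refresh
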
	I1101 08:50:48.524544  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:48.531119  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:49.018726  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:49.026282  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:49.518154  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:49.526614  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:50.018751  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:50.026031  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:50.518756  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:50.526155  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:51.018153  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:51.026760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:51.518286  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:51.526672  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:52.017371  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:52.027754  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:52.518074  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:52.526416  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:53.018974  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:53.026602  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:53.518144  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:53.526654  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:54.018625  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:54.026704  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:54.517492  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:54.525999  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:55.019257  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:55.027958  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:55.518075  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:55.526142  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:56.018092  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:56.025605  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:56.518596  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:56.525863  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:57.017562  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:57.025851  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:57.518709  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:57.526387  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:58.018590  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:58.025978  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:58.517643  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:58.525642  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:59.018664  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:59.025863  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:59.517006  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:59.527349  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:00.020576  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:00.029108  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:00.518333  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:00.527511  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:01.018504  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:01.027157  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:01.518405  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:01.526704  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:02.018500  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:02.026694  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:02.517768  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:02.526967  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:03.018243  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:03.026700  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:03.517836  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:03.526719  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:04.017510  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:04.025944  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:04.517662  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:04.526213  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:05.019140  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:05.026847  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:05.522889  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:05.526826  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:06.017784  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:06.026272  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:06.517992  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:06.527109  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:07.018586  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:07.026175  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:07.518974  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:07.526376  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:08.018995  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:08.026615  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:08.517947  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:08.526011  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:09.018511  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:09.025631  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:09.518218  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:09.526593  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:10.018682  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:10.026784  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:10.519095  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:10.527301  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:11.018993  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:11.025690  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:11.518483  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:11.526408  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:12.018208  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:12.027483  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:12.518108  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:12.528506  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:13.018723  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:13.026036  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:13.519547  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:13.525883  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:14.017886  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:14.026485  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:14.518428  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:14.526099  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:15.018816  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:15.028223  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:15.517235  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:15.526608  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:16.019497  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:16.026823  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:16.518374  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:16.526536  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:17.019643  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:17.026636  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:17.519221  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:17.527357  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:18.018310  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:18.027561  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:18.517385  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:18.526970  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:19.018802  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:19.026280  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:19.518858  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:19.527610  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:20.017707  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:20.028465  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:20.518519  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:20.526293  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:21.026625  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:21.030779  535088 kapi.go:107] duration metric: took 5m45.508455317s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1101 08:51:21.518734  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:22.018071  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:22.517851  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:23.022943  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:23.518235  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:24.018970  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:24.517611  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:25.019971  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:25.519134  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:26.018419  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:26.518767  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:27.018701  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:27.519283  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:28.019085  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:28.518032  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:29.019182  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:29.519048  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:30.018264  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:30.518858  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:31.018124  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:31.519120  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:32.021956  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:32.519959  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:33.014506  535088 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=registry" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1101 08:51:33.014547  535088 kapi.go:107] duration metric: took 6m0.000528296s to wait for kubernetes.io/minikube-addons=registry ...
	W1101 08:51:33.014668  535088 out.go:285] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I1101 08:51:33.016548  535088 out.go:179] * Enabled addons: amd-gpu-device-plugin, storage-provisioner, cloud-spanner, ingress-dns, nvidia-device-plugin, registry-creds, metrics-server, yakd, default-storageclass, volumesnapshots, ingress, gcp-auth, csi-hostpath-driver
	I1101 08:51:33.017988  535088 addons.go:515] duration metric: took 6m9.594756816s for enable addons: enabled=[amd-gpu-device-plugin storage-provisioner cloud-spanner ingress-dns nvidia-device-plugin registry-creds metrics-server yakd default-storageclass volumesnapshots ingress gcp-auth csi-hostpath-driver]
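
Enabling 'registry' gave up after the 6m0s wait above while its pods never left Pending. A hedged diagnostic sketch, reusing the kube-system namespace and the kubernetes.io/minikube-addons=registry selector from the log, to see what is holding the pods back (image pulls, scheduling, and so on):

    kubectl --context addons-994396 -n kube-system get pods -l kubernetes.io/minikube-addons=registry -o wide
    kubectl --context addons-994396 -n kube-system describe pods -l kubernetes.io/minikube-addons=registry
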
	I1101 08:51:33.018036  535088 start.go:247] waiting for cluster config update ...
	I1101 08:51:33.018057  535088 start.go:256] writing updated cluster config ...
	I1101 08:51:33.018363  535088 ssh_runner.go:195] Run: rm -f paused
	I1101 08:51:33.027702  535088 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 08:51:33.035072  535088 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2rqh8" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.039692  535088 pod_ready.go:94] pod "coredns-66bc5c9577-2rqh8" is "Ready"
	I1101 08:51:33.039727  535088 pod_ready.go:86] duration metric: took 4.614622ms for pod "coredns-66bc5c9577-2rqh8" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.041954  535088 pod_ready.go:83] waiting for pod "etcd-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.046075  535088 pod_ready.go:94] pod "etcd-addons-994396" is "Ready"
	I1101 08:51:33.046103  535088 pod_ready.go:86] duration metric: took 4.126087ms for pod "etcd-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.048189  535088 pod_ready.go:83] waiting for pod "kube-apiserver-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.052772  535088 pod_ready.go:94] pod "kube-apiserver-addons-994396" is "Ready"
	I1101 08:51:33.052802  535088 pod_ready.go:86] duration metric: took 4.587761ms for pod "kube-apiserver-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.055446  535088 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.433771  535088 pod_ready.go:94] pod "kube-controller-manager-addons-994396" is "Ready"
	I1101 08:51:33.433801  535088 pod_ready.go:86] duration metric: took 378.329685ms for pod "kube-controller-manager-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.634675  535088 pod_ready.go:83] waiting for pod "kube-proxy-fbmdq" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:34.034403  535088 pod_ready.go:94] pod "kube-proxy-fbmdq" is "Ready"
	I1101 08:51:34.034444  535088 pod_ready.go:86] duration metric: took 399.738812ms for pod "kube-proxy-fbmdq" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:34.233978  535088 pod_ready.go:83] waiting for pod "kube-scheduler-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:34.633095  535088 pod_ready.go:94] pod "kube-scheduler-addons-994396" is "Ready"
	I1101 08:51:34.633131  535088 pod_ready.go:86] duration metric: took 399.109096ms for pod "kube-scheduler-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:34.633149  535088 pod_ready.go:40] duration metric: took 1.605381934s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
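
pod_ready.go polls each of the label selectors listed above until the matching kube-system pods report Ready. Roughly the same check can be reproduced by hand with kubectl wait; a sketch assuming the addons-994396 context and two of the selectors from this run:

    kubectl --context addons-994396 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
    kubectl --context addons-994396 -n kube-system wait pod -l component=kube-apiserver --for=condition=Ready --timeout=4m
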
	I1101 08:51:34.682753  535088 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 08:51:34.684612  535088 out.go:179] * Done! kubectl is now configured to use "addons-994396" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 01 08:58:13 addons-994396 crio[817]: time="2025-11-01 08:58:13.490385910Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761987493490353917,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:454585,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=de147c97-4d68-469d-b13b-d93215a7e2ca name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 08:58:13 addons-994396 crio[817]: time="2025-11-01 08:58:13.491588687Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ddf054d-9201-496a-98bd-7b2962fff955 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:58:13 addons-994396 crio[817]: time="2025-11-01 08:58:13.492023593Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ddf054d-9201-496a-98bd-7b2962fff955 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:58:13 addons-994396 crio[817]: time="2025-11-01 08:58:13.492752554Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9aac7eb34690309e8dbd81343ee4a3afed4182f729bfb09119b2d0449fcb5163,PodSandboxId:cdbcecc3e9d43396748d11feb94389c468413b4e4db1f33c0ffbb67ba8cb8455,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761987117609973399,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f6cc746-15b0-4ddb-9f8b-fa3a7e7133ea,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c914a21ca5c30d325bf10151384a21f9bbcc7e25b2d34ca61bfaddd16505122,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1761987080383755595,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:437ef3bce50ac8a7ca0b9a31a96e010fea2dd24bba8a7a5f778f7bb5721a6a9d,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1761987048807726890,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f73cee1644b036ab76f839b96acf06de4009bbf807c978116290374a0b56065c,PodSandboxId:147663b03fe636d80386c5b9e498c5fb95c78d278121e7fb146f12c7e973609d,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761986999531197788,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-9cxnd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bf616938-c2ab-4f4c-92c8-9fa4ab2f6be9,},Annotations:map[string]
string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:862808e2ff30fdd764f8aaf3d5b1a5df067d9f837db07ff0372f86bd3b55cab5,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1761986992483188170,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4eac7bee2514139306d8419dc1c70f3cc677629e0546239a0322053b09eab44,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1761986961550289998,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89e19f39781eba8b57e656eb2450f2409f9b0faf0e3401335506a480d9066dc6,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1761986930173408810,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68bf99b640c16170eb3d1decd09fc1b538fbd6fde76792990703d14d18fd9728,PodSandboxId:c090988aa5e05ea1d7a0662eb99922460d3efcf1e9882123710f19fefe939704,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1761986868787532616,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf63ab79-b3fa-4917-a62b-a0758d1521b0,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39137378c3801cd49058632db343f950f188a84e2ff8cf681c71963efac4314f,PodSandboxId:6eaf5e212ad1c55657254e78247ce413b9c2d3e12e8e2cd69b6ccde788266623,Metadata:&ContainerMetadata{Name
:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1761986866382667222,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ee1d9b2-a99a-4003-9c65-77bd5e500b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80b7ac026d7558ab3c69afb722ff55dfe32d67be3e2bf197089b95da3dd31104,PodSandboxId:5ef1abbd77f24535b60585d2197c8a2259c59626ad0eb005b609003b505409e3,Metada
ta:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761986864620312300,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-jbkmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19dc2ae7-668b-4952-9c2d-6602eac4449e,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63011b6ec66fda56834e6c96c9772b128675e14e51fd5b96d9518a8ba29fa35,PodSandbox
Id:eeeab7772fb0e74c5be38da53381a6b90d0d5c26e9c8b732d2e1c6eb63671c65,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761986864516805400,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-2pbx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9e973a4-20dd-4785-a3d6-1557c012cc76,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6
e0352b147e8a8fe43c9d94072f3f3fcc98914a55a5718cfd5fe168dcdb81f49,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1761986863046366251,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fbb154c5ba009280da1a426866a4cdde2195fb0006640dafb05c0da182a4866,PodSandboxId:058d4f2c90db7e8eae07ad5783426e56e467541eacbcb171f0f9227663407e68,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761986861153109309,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dmt9r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7e49bedc-b72d-400d-bc07-62040e55ac39,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e6c68a57ee535127b46ca112ce1439ee32d248af87fb4452856eb3e38c8eb2e,PodSandboxId:a5dfb28615faf962ed89b8003d79c80e87152c2a8d669af58898bd3254030389,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761986861018576547,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6ptqs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9fe7abf8-c7e2-47ee-ac99-699c34674a22,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d2226436f827529da95ea6b9148e9aad9e62a07499351f701e80b097311d036,PodSandboxId:c449271f0824b108061a1ee1fc23fbe6d16056014d0cfc3011aa2c20b94a8e24,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1761986829754350164,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-bzs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151e456a-63e0-4527-8511-34c4444fef48,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.
ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda41d22ea7ff808cb20920820ccf87f95d0c484f75f853dec58fc5d4aaa461b,PodSandboxId:e07af8e7a3ecad5569ae3da9545b988c374ac9f7b90e8533dd68c1dd6ecef92c,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761986826170977750,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-z8nnd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c555360c-9a9f-4f
dd-aa67-f18c3d2a4eb2,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b56bd6c195bd711f17cd7b927c9fbb20679383d08b6e954d3297e9850be5235,PodSandboxId:6d69749ca9bc78fa01c49c7d0757f3d0eafa3536279a622367a1a3b427e5d70c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1761986821805194743,Labels:map[string]string{io.kubernetes.container.name: local-pa
th-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-9ghvj,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d3c3231a-40d9-42f1-bc78-e2d1a104327a,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4c1be283a7f47690c854c85c4dcacc3e8b42f6727081c4a8a73e3e44c1d194,PodSandboxId:9f7ac0dd48cc1abeb4273f865cde830d51e77c8bd29a6c76ccecaf35745e99f7,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:176198675844940796
3,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d947f942-2149-492a-9b4e-1f9c22405815,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad7748982f904bf89ac86d1b7be83acfe37cfe9d240db5a3d2236808b8910a3,PodSandboxId:ca1dd787f338ac0254f2b930b7369f671d7ee68d7732bee6af1cf786d745c456,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c887
2c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761986733821709901,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0182754-0c9c-458b-a340-20ec025cb56c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bb5f4d4e768dfe5c0cf6bc80363bf72a32d74ddba50c19fc7e3e82b2268e1d3,PodSandboxId:fec37181f6706eb4994bc850d0e6623521190c923720024b4407780ba5c3168a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761986732059653348,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-vssmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b8c16e-b583-47df-a5c2-97218d3ec5be,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ff7b8e8784408623315cf07e8942d13f74e52cb65ad09e2d25796114020c1,PodSandboxId:d62d15d11c4955eb24e7866e8b7732b6d4471d399c0e33cef74d06eb40917eec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e
0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761986725130503569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2rqh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131b2b2-f9b9-4197-8bc7-4d1bc185c804,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0a2f86b38f42fab057b3fea7994c150
73ec1d05f3db97341f0fed0ad342cf9,PodSandboxId:e1fb2fcb1123b9a18ac17a1d8481c82478eed03828d094aab60d26b7c2f58bbd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761986724242985390,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fbmdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5dd6b4-2f38-4c9d-acd8-92f7984fd96a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80489befa62b8185c103a7d016a78a5924e4c5187536cb66142d1c5f8cc4a5b5,P
odSandboxId:d4cfa30f1a32a450d85f51370323574b5a0bcae75643efe39250a8b24cc1a1c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761986712208719638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0eeda84be59c6c1c023d04bf2f88758,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:844d913e662bc4587cf597763a1bad42bb8a4bf500ce948d822cfcb86a7e9fde,PodSandboxId:e2f739ab181cd43a508788c71e0d98b6ca0994d643a2896de2364e7f842ffa0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761986712197993742,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d081dd6df6b55662a095a017ad5712,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdeec4098b47d6e27b77f71ac1761aeb26a09c97d53566cde6a7c5ae79150c25,PodSandboxId:f1c88f09470e5834b2b0cfcdaddaf03ac25c10fd6f3492dc69b5941eb059bbae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761986712168522475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abcff5cb337834c6fd7a11d68a6b7be4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35bb45a49c1f528c9112deb8bfa037389ae6fae43afcbb2f86e4c3ed61156bf8,PodSandboxId:80615bf9878bb70db26be3ecace94169c4b7e503113541f10f7df27e95d8c035,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761986712170158026,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5912e2b5f9c4192157a57bf3d5021f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505
,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7ddf054d-9201-496a-98bd-7b2962fff955 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:58:13 addons-994396 crio[817]: time="2025-11-01 08:58:13.504810466Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=491544d0-c2fc-4247-bf36-3abec1457ad6 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 01 08:58:13 addons-994396 crio[817]: time="2025-11-01 08:58:13.506361892Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:7a688e95ff774d333d03aeba9040f6474240997aeddde89b8afd82798cc9e706,Metadata:&PodSandboxMetadata{Name:nginx,Uid:9c49ac5d-18e5-470b-9217-c0a58f0636a1,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761987369396636901,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9c49ac5d-18e5-470b-9217-c0a58f0636a1,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:56:09.077414941Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d45873adc32c059e48321204580348724fe2849e18f32a716c6a20a49980c0f0,Metadata:&PodSandboxMetadata{Name:helper-pod-create-pvc-2db794c4-2444-4d03-b933-772cf722902e,Uid:e25da403-345f-40f6-b6f9-e28731089dd6,Namespace:local-path-storage,Attempt:0,},State:SANDBOX_READY,CreatedAt
:1761987329226197208,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: helper-pod-create-pvc-2db794c4-2444-4d03-b933-772cf722902e,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e25da403-345f-40f6-b6f9-e28731089dd6,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:55:28.903715198Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c5a1f5307a5a0e8d620f46ea3fb4500fae706cd5d81b910f9344a2dc34840763,Metadata:&PodSandboxMetadata{Name:task-pv-pod,Uid:8623da74-791e-4fd6-a974-60ebca5738a7,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761987164436439077,Labels:map[string]string{app: task-pv-pod,io.kubernetes.container.name: POD,io.kubernetes.pod.name: task-pv-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8623da74-791e-4fd6-a974-60ebca5738a7,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:52:44.116093759Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox
{Id:cdbcecc3e9d43396748d11feb94389c468413b4e4db1f33c0ffbb67ba8cb8455,Metadata:&PodSandboxMetadata{Name:busybox,Uid:4f6cc746-15b0-4ddb-9f8b-fa3a7e7133ea,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761987095651519563,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f6cc746-15b0-4ddb-9f8b-fa3a7e7133ea,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:51:35.327103269Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:147663b03fe636d80386c5b9e498c5fb95c78d278121e7fb146f12c7e973609d,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-675c5ddd98-9cxnd,Uid:bf616938-c2ab-4f4c-92c8-9fa4ab2f6be9,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986982879427207,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-aut
h-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-9cxnd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bf616938-c2ab-4f4c-92c8-9fa4ab2f6be9,pod-template-hash: 675c5ddd98,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:45:32.720554779Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c090988aa5e05ea1d7a0662eb99922460d3efcf1e9882123710f19fefe939704,Metadata:&PodSandboxMetadata{Name:csi-hostpath-resizer-0,Uid:cf63ab79-b3fa-4917-a62b-a0758d1521b0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986738627441276,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,app.kubernetes.io/name: csi-hostpath-resizer,apps.kubernetes.io/pod-index: 0,controller-revision-hash: csi-hostpath-resizer-5f4978ffc6,io.kubernetes.container.name: POD,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf63ab79-b3fa-4917
-a62b-a0758d1521b0,kubernetes.io/minikube-addons: csi-hostpath-driver,statefulset.kubernetes.io/pod-name: csi-hostpath-resizer-0,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:45:35.497727216Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6eaf5e212ad1c55657254e78247ce413b9c2d3e12e8e2cd69b6ccde788266623,Metadata:&PodSandboxMetadata{Name:csi-hostpath-attacher-0,Uid:3ee1d9b2-a99a-4003-9c65-77bd5e500b0a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986736970680925,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,app.kubernetes.io/name: csi-hostpath-attacher,apps.kubernetes.io/pod-index: 0,controller-revision-hash: csi-hostpath-attacher-576bccf57,io.kubernetes.container.name: POD,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ee1d9b2-a99a-4003-9c65-77bd5e500b0a,kubernetes.io/minikube-addons: csi-hostpath-driver,statefulset.kubernetes.io/pod-name: csi-hostpath-at
tacher-0,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:45:35.165829458Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&PodSandboxMetadata{Name:csi-hostpathplugin-7l7ps,Uid:a1c291ec-002e-43dc-acb1-5bc4483fa6fd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986736808163856,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,app.kubernetes.io/component: plugin,app.kubernetes.io/instance: hostpath.csi.k8s.io,app.kubernetes.io/name: csi-hostpathplugin,app.kubernetes.io/part-of: csi-driver-host-path,controller-revision-hash: bfd669d76,io.kubernetes.container.name: POD,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,kubernetes.io/minikube-addons: csi-hostpath-driver,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-
01T08:45:35.283625413Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5ef1abbd77f24535b60585d2197c8a2259c59626ad0eb005b609003b505409e3,Metadata:&PodSandboxMetadata{Name:snapshot-controller-7d9fbc56b8-jbkmr,Uid:19dc2ae7-668b-4952-9c2d-6602eac4449e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986736781212367,Labels:map[string]string{app: snapshot-controller,io.kubernetes.container.name: POD,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-jbkmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19dc2ae7-668b-4952-9c2d-6602eac4449e,pod-template-hash: 7d9fbc56b8,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:45:33.962278007Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:eeeab7772fb0e74c5be38da53381a6b90d0d5c26e9c8b732d2e1c6eb63671c65,Metadata:&PodSandboxMetadata{Name:snapshot-controller-7d9fbc56b8-2pbx5,Uid:e9e973a4-20dd-4785-a3d6-1557c012cc76,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,Created
At:1761986735686254069,Labels:map[string]string{app: snapshot-controller,io.kubernetes.container.name: POD,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-2pbx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9e973a4-20dd-4785-a3d6-1557c012cc76,pod-template-hash: 7d9fbc56b8,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:45:33.919600116Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a5dfb28615faf962ed89b8003d79c80e87152c2a8d669af58898bd3254030389,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-6ptqs,Uid:9fe7abf8-c7e2-47ee-ac99-699c34674a22,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1761986733549694504,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: 608bce68-2083-4523-b519-13c4d6cad8fa,batch.kubernetes.io/job-name: ingress-nginx-admission-create,controller-uid:
608bce68-2083-4523-b519-13c4d6cad8fa,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-6ptqs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9fe7abf8-c7e2-47ee-ac99-699c34674a22,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:45:32.773581153Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:058d4f2c90db7e8eae07ad5783426e56e467541eacbcb171f0f9227663407e68,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-dmt9r,Uid:7e49bedc-b72d-400d-bc07-62040e55ac39,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1761986733206850623,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: bb2b857a-ecad-44c0-93d8-e9ecb84ec3bf,batch.kubernetes.io/job-name: ingress-nginx-admission-patch,controller-uid: bb2b857a-ecad-44c0-93d8
-e9ecb84ec3bf,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-dmt9r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7e49bedc-b72d-400d-bc07-62040e55ac39,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:45:32.824364839Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e07af8e7a3ecad5569ae3da9545b988c374ac9f7b90e8533dd68c1dd6ecef92c,Metadata:&PodSandboxMetadata{Name:gadget-z8nnd,Uid:c555360c-9a9f-4fdd-aa67-f18c3d2a4eb2,Namespace:gadget,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986732252775766,Labels:map[string]string{controller-revision-hash: d797fcb64,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gadget-z8nnd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c555360c-9a9f-4fdd-aa67-f18c3d2a4eb2,k8s-app: gadget,pod-template-generation: 1,},Annotations:map[string]string{container.apparmor.security.beta.kubernetes.io/gadget: unconfined,kubernetes.io/
config.seen: 2025-11-01T08:45:31.810689200Z,kubernetes.io/config.source: api,prometheus.io/path: /metrics,prometheus.io/port: 2223,prometheus.io/scrape: true,},RuntimeHandler:,},&PodSandbox{Id:6d69749ca9bc78fa01c49c7d0757f3d0eafa3536279a622367a1a3b427e5d70c,Metadata:&PodSandboxMetadata{Name:local-path-provisioner-648f6765c9-9ghvj,Uid:d3c3231a-40d9-42f1-bc78-e2d1a104327a,Namespace:local-path-storage,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986731585408537,Labels:map[string]string{app: local-path-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-9ghvj,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d3c3231a-40d9-42f1-bc78-e2d1a104327a,pod-template-hash: 648f6765c9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:45:30.990687010Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ca1dd787f338ac0254f2b930b7369f671d7ee68d7732bee6af1cf786d745c456,Metadata:&PodSandboxMetadata{Name:storage-provis
ioner,Uid:a0182754-0c9c-458b-a340-20ec025cb56c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986731574668336,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0182754-0c9c-458b-a340-20ec025cb56c,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"se
rviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-11-01T08:45:30.530361901Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9f7ac0dd48cc1abeb4273f865cde830d51e77c8bd29a6c76ccecaf35745e99f7,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:d947f942-2149-492a-9b4e-1f9c22405815,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986731411874379,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d947f942-2149-492a-9b4e-1f9c22405815,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-
system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"hostPort\":53,\"protocol\":\"UDP\"}],\"volumeMounts\":[{\"mountPath\":\"/config\",\"name\":\"minikube-ingress-dns-config-volume\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\",\"volumes\":[{\"configMap\":{\"name\":\"minikube-ingress-dns\"},\"name\":\"minikube-ingress-dns-config-volume\"}]}}\n,kubernetes.io/config.seen: 2025-11-01T08:45:29.770167923Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c449271f0824b108061a1ee1fc23fbe6d16056014d0cfc3011aa2c20b94a8e24,Metadata:&PodSandboxMetadata{Name
:registry-proxy-bzs78,Uid:151e456a-63e0-4527-8511-34c4444fef48,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986731364422760,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,controller-revision-hash: 65b944f647,io.kubernetes.container.name: POD,io.kubernetes.pod.name: registry-proxy-bzs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151e456a-63e0-4527-8511-34c4444fef48,kubernetes.io/minikube-addons: registry,pod-template-generation: 1,registry-proxy: true,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:45:29.495875265Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b06b6cc06bc5fa49dc1e6aa03c98e75401763147b91202b99f1d103ce1ee29d2,Metadata:&PodSandboxMetadata{Name:registry-6b586f9694-b4ph6,Uid:f2c8e5be-bee4-4b31-a8dc-ee43d6a6430c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986731333681368,Labels:map[string]string{actual-registry: true,addonmanager.kubernetes.io/mode: Reconcile,io.kub
ernetes.container.name: POD,io.kubernetes.pod.name: registry-6b586f9694-b4ph6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c8e5be-bee4-4b31-a8dc-ee43d6a6430c,kubernetes.io/minikube-addons: registry,pod-template-hash: 6b586f9694,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:45:29.152437473Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fec37181f6706eb4994bc850d0e6623521190c923720024b4407780ba5c3168a,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plugin-vssmp,Uid:a3b8c16e-b583-47df-a5c2-97218d3ec5be,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986727049009432,Labels:map[string]string{controller-revision-hash: 7f87d6fd8d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-vssmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b8c16e-b583-47df-a5c2-97218d3ec5be,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annotations:map[string]strin
g{kubernetes.io/config.seen: 2025-11-01T08:45:26.718957327Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d62d15d11c4955eb24e7866e8b7732b6d4471d399c0e33cef74d06eb40917eec,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-2rqh8,Uid:b131b2b2-f9b9-4197-8bc7-4d1bc185c804,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986724017093656,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-2rqh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131b2b2-f9b9-4197-8bc7-4d1bc185c804,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:45:23.654384746Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e1fb2fcb1123b9a18ac17a1d8481c82478eed03828d094aab60d26b7c2f58bbd,Metadata:&PodSandboxMetadata{Name:kube-proxy-fbmdq,Uid:dc5dd6b4-2f38-4c9d-acd8-92f7984fd96a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:17619867238
55325038,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-fbmdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5dd6b4-2f38-4c9d-acd8-92f7984fd96a,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:45:23.475753329Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e2f739ab181cd43a508788c71e0d98b6ca0994d643a2896de2364e7f842ffa0d,Metadata:&PodSandboxMetadata{Name:etcd-addons-994396,Uid:31d081dd6df6b55662a095a017ad5712,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986711956221288,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d081dd6df6b55662a095a017ad5712,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.
39.195:2379,kubernetes.io/config.hash: 31d081dd6df6b55662a095a017ad5712,kubernetes.io/config.seen: 2025-11-01T08:45:11.165275870Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:80615bf9878bb70db26be3ecace94169c4b7e503113541f10f7df27e95d8c035,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-994396,Uid:5912e2b5f9c4192157a57bf3d5021f7e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986711949626239,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5912e2b5f9c4192157a57bf3d5021f7e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5912e2b5f9c4192157a57bf3d5021f7e,kubernetes.io/config.seen: 2025-11-01T08:45:11.165273714Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d4cfa30f1a32a450d85f51370323574b5a0bcae75643efe39250a8b24cc1a1c1,Metadata:&
PodSandboxMetadata{Name:kube-scheduler-addons-994396,Uid:e0eeda84be59c6c1c023d04bf2f88758,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986711947877914,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0eeda84be59c6c1c023d04bf2f88758,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e0eeda84be59c6c1c023d04bf2f88758,kubernetes.io/config.seen: 2025-11-01T08:45:11.165274783Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f1c88f09470e5834b2b0cfcdaddaf03ac25c10fd6f3492dc69b5941eb059bbae,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-994396,Uid:abcff5cb337834c6fd7a11d68a6b7be4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986711944495415,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-add
ons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abcff5cb337834c6fd7a11d68a6b7be4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.195:8443,kubernetes.io/config.hash: abcff5cb337834c6fd7a11d68a6b7be4,kubernetes.io/config.seen: 2025-11-01T08:45:11.165269521Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=491544d0-c2fc-4247-bf36-3abec1457ad6 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 01 08:58:13 addons-994396 crio[817]: time="2025-11-01 08:58:13.510382263Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4de26a0f-4c5c-44c9-9c56-49517726667d name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:58:13 addons-994396 crio[817]: time="2025-11-01 08:58:13.510462572Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4de26a0f-4c5c-44c9-9c56-49517726667d name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:58:13 addons-994396 crio[817]: time="2025-11-01 08:58:13.511034237Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9aac7eb34690309e8dbd81343ee4a3afed4182f729bfb09119b2d0449fcb5163,PodSandboxId:cdbcecc3e9d43396748d11feb94389c468413b4e4db1f33c0ffbb67ba8cb8455,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761987117609973399,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f6cc746-15b0-4ddb-9f8b-fa3a7e7133ea,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c914a21ca5c30d325bf10151384a21f9bbcc7e25b2d34ca61bfaddd16505122,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1761987080383755595,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:437ef3bce50ac8a7ca0b9a31a96e010fea2dd24bba8a7a5f778f7bb5721a6a9d,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1761987048807726890,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f73cee1644b036ab76f839b96acf06de4009bbf807c978116290374a0b56065c,PodSandboxId:147663b03fe636d80386c5b9e498c5fb95c78d278121e7fb146f12c7e973609d,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761986999531197788,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-9cxnd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bf616938-c2ab-4f4c-92c8-9fa4ab2f6be9,},Annotations:map[string]
string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:862808e2ff30fdd764f8aaf3d5b1a5df067d9f837db07ff0372f86bd3b55cab5,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1761986992483188170,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4eac7bee2514139306d8419dc1c70f3cc677629e0546239a0322053b09eab44,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1761986961550289998,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89e19f39781eba8b57e656eb2450f2409f9b0faf0e3401335506a480d9066dc6,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1761986930173408810,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68bf99b640c16170eb3d1decd09fc1b538fbd6fde76792990703d14d18fd9728,PodSandboxId:c090988aa5e05ea1d7a0662eb99922460d3efcf1e9882123710f19fefe939704,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1761986868787532616,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf63ab79-b3fa-4917-a62b-a0758d1521b0,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39137378c3801cd49058632db343f950f188a84e2ff8cf681c71963efac4314f,PodSandboxId:6eaf5e212ad1c55657254e78247ce413b9c2d3e12e8e2cd69b6ccde788266623,Metadata:&ContainerMetadata{Name
:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1761986866382667222,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ee1d9b2-a99a-4003-9c65-77bd5e500b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80b7ac026d7558ab3c69afb722ff55dfe32d67be3e2bf197089b95da3dd31104,PodSandboxId:5ef1abbd77f24535b60585d2197c8a2259c59626ad0eb005b609003b505409e3,Metada
ta:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761986864620312300,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-jbkmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19dc2ae7-668b-4952-9c2d-6602eac4449e,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63011b6ec66fda56834e6c96c9772b128675e14e51fd5b96d9518a8ba29fa35,PodSandbox
Id:eeeab7772fb0e74c5be38da53381a6b90d0d5c26e9c8b732d2e1c6eb63671c65,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761986864516805400,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-2pbx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9e973a4-20dd-4785-a3d6-1557c012cc76,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6
e0352b147e8a8fe43c9d94072f3f3fcc98914a55a5718cfd5fe168dcdb81f49,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1761986863046366251,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fbb154c5ba009280da1a426866a4cdde2195fb0006640dafb05c0da182a4866,PodSandboxId:058d4f2c90db7e8eae07ad5783426e56e467541eacbcb171f0f9227663407e68,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761986861153109309,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dmt9r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7e49bedc-b72d-400d-bc07-62040e55ac39,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e6c68a57ee535127b46ca112ce1439ee32d248af87fb4452856eb3e38c8eb2e,PodSandboxId:a5dfb28615faf962ed89b8003d79c80e87152c2a8d669af58898bd3254030389,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761986861018576547,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6ptqs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9fe7abf8-c7e2-47ee-ac99-699c34674a22,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d2226436f827529da95ea6b9148e9aad9e62a07499351f701e80b097311d036,PodSandboxId:c449271f0824b108061a1ee1fc23fbe6d16056014d0cfc3011aa2c20b94a8e24,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1761986829754350164,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-bzs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151e456a-63e0-4527-8511-34c4444fef48,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.
ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda41d22ea7ff808cb20920820ccf87f95d0c484f75f853dec58fc5d4aaa461b,PodSandboxId:e07af8e7a3ecad5569ae3da9545b988c374ac9f7b90e8533dd68c1dd6ecef92c,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761986826170977750,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-z8nnd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c555360c-9a9f-4f
dd-aa67-f18c3d2a4eb2,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b56bd6c195bd711f17cd7b927c9fbb20679383d08b6e954d3297e9850be5235,PodSandboxId:6d69749ca9bc78fa01c49c7d0757f3d0eafa3536279a622367a1a3b427e5d70c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1761986821805194743,Labels:map[string]string{io.kubernetes.container.name: local-pa
th-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-9ghvj,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d3c3231a-40d9-42f1-bc78-e2d1a104327a,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4c1be283a7f47690c854c85c4dcacc3e8b42f6727081c4a8a73e3e44c1d194,PodSandboxId:9f7ac0dd48cc1abeb4273f865cde830d51e77c8bd29a6c76ccecaf35745e99f7,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:176198675844940796
3,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d947f942-2149-492a-9b4e-1f9c22405815,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad7748982f904bf89ac86d1b7be83acfe37cfe9d240db5a3d2236808b8910a3,PodSandboxId:ca1dd787f338ac0254f2b930b7369f671d7ee68d7732bee6af1cf786d745c456,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c887
2c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761986733821709901,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0182754-0c9c-458b-a340-20ec025cb56c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bb5f4d4e768dfe5c0cf6bc80363bf72a32d74ddba50c19fc7e3e82b2268e1d3,PodSandboxId:fec37181f6706eb4994bc850d0e6623521190c923720024b4407780ba5c3168a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761986732059653348,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-vssmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b8c16e-b583-47df-a5c2-97218d3ec5be,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ff7b8e8784408623315cf07e8942d13f74e52cb65ad09e2d25796114020c1,PodSandboxId:d62d15d11c4955eb24e7866e8b7732b6d4471d399c0e33cef74d06eb40917eec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e
0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761986725130503569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2rqh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131b2b2-f9b9-4197-8bc7-4d1bc185c804,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0a2f86b38f42fab057b3fea7994c150
73ec1d05f3db97341f0fed0ad342cf9,PodSandboxId:e1fb2fcb1123b9a18ac17a1d8481c82478eed03828d094aab60d26b7c2f58bbd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761986724242985390,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fbmdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5dd6b4-2f38-4c9d-acd8-92f7984fd96a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80489befa62b8185c103a7d016a78a5924e4c5187536cb66142d1c5f8cc4a5b5,P
odSandboxId:d4cfa30f1a32a450d85f51370323574b5a0bcae75643efe39250a8b24cc1a1c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761986712208719638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0eeda84be59c6c1c023d04bf2f88758,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:844d913e662bc4587cf597763a1bad42bb8a4bf500ce948d822cfcb86a7e9fde,PodSandboxId:e2f739ab181cd43a508788c71e0d98b6ca0994d643a2896de2364e7f842ffa0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761986712197993742,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d081dd6df6b55662a095a017ad5712,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdeec4098b47d6e27b77f71ac1761aeb26a09c97d53566cde6a7c5ae79150c25,PodSandboxId:f1c88f09470e5834b2b0cfcdaddaf03ac25c10fd6f3492dc69b5941eb059bbae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761986712168522475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abcff5cb337834c6fd7a11d68a6b7be4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35bb45a49c1f528c9112deb8bfa037389ae6fae43afcbb2f86e4c3ed61156bf8,PodSandboxId:80615bf9878bb70db26be3ecace94169c4b7e503113541f10f7df27e95d8c035,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761986712170158026,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5912e2b5f9c4192157a57bf3d5021f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505
,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4de26a0f-4c5c-44c9-9c56-49517726667d name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:58:13 addons-994396 crio[817]: time="2025-11-01 08:58:13.522369355Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=33f47d57-a770-404e-aa2b-585616c78b1b name=/runtime.v1.RuntimeService/Status
	Nov 01 08:58:13 addons-994396 crio[817]: time="2025-11-01 08:58:13.522457586Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=33f47d57-a770-404e-aa2b-585616c78b1b name=/runtime.v1.RuntimeService/Status
	Nov 01 08:58:13 addons-994396 crio[817]: time="2025-11-01 08:58:13.541882649Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=487b7066-271e-4aed-821f-7b3ee0c8aa40 name=/runtime.v1.RuntimeService/Version
	Nov 01 08:58:13 addons-994396 crio[817]: time="2025-11-01 08:58:13.542101952Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=487b7066-271e-4aed-821f-7b3ee0c8aa40 name=/runtime.v1.RuntimeService/Version
	Nov 01 08:58:13 addons-994396 crio[817]: time="2025-11-01 08:58:13.543845538Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=46e50f40-af20-42b3-a350-6bb56e461384 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 08:58:13 addons-994396 crio[817]: time="2025-11-01 08:58:13.545798309Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761987493545754086,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:454585,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=46e50f40-af20-42b3-a350-6bb56e461384 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 08:58:13 addons-994396 crio[817]: time="2025-11-01 08:58:13.547089635Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=abf55720-3d49-4285-ada7-be5a78d44638 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:58:13 addons-994396 crio[817]: time="2025-11-01 08:58:13.547199228Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=abf55720-3d49-4285-ada7-be5a78d44638 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:58:13 addons-994396 crio[817]: time="2025-11-01 08:58:13.549406085Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9aac7eb34690309e8dbd81343ee4a3afed4182f729bfb09119b2d0449fcb5163,PodSandboxId:cdbcecc3e9d43396748d11feb94389c468413b4e4db1f33c0ffbb67ba8cb8455,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761987117609973399,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f6cc746-15b0-4ddb-9f8b-fa3a7e7133ea,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c914a21ca5c30d325bf10151384a21f9bbcc7e25b2d34ca61bfaddd16505122,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1761987080383755595,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:437ef3bce50ac8a7ca0b9a31a96e010fea2dd24bba8a7a5f778f7bb5721a6a9d,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1761987048807726890,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f73cee1644b036ab76f839b96acf06de4009bbf807c978116290374a0b56065c,PodSandboxId:147663b03fe636d80386c5b9e498c5fb95c78d278121e7fb146f12c7e973609d,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761986999531197788,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-9cxnd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bf616938-c2ab-4f4c-92c8-9fa4ab2f6be9,},Annotations:map[string]
string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:862808e2ff30fdd764f8aaf3d5b1a5df067d9f837db07ff0372f86bd3b55cab5,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1761986992483188170,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4eac7bee2514139306d8419dc1c70f3cc677629e0546239a0322053b09eab44,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1761986961550289998,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89e19f39781eba8b57e656eb2450f2409f9b0faf0e3401335506a480d9066dc6,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1761986930173408810,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68bf99b640c16170eb3d1decd09fc1b538fbd6fde76792990703d14d18fd9728,PodSandboxId:c090988aa5e05ea1d7a0662eb99922460d3efcf1e9882123710f19fefe939704,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1761986868787532616,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf63ab79-b3fa-4917-a62b-a0758d1521b0,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39137378c3801cd49058632db343f950f188a84e2ff8cf681c71963efac4314f,PodSandboxId:6eaf5e212ad1c55657254e78247ce413b9c2d3e12e8e2cd69b6ccde788266623,Metadata:&ContainerMetadata{Name
:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1761986866382667222,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ee1d9b2-a99a-4003-9c65-77bd5e500b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80b7ac026d7558ab3c69afb722ff55dfe32d67be3e2bf197089b95da3dd31104,PodSandboxId:5ef1abbd77f24535b60585d2197c8a2259c59626ad0eb005b609003b505409e3,Metada
ta:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761986864620312300,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-jbkmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19dc2ae7-668b-4952-9c2d-6602eac4449e,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63011b6ec66fda56834e6c96c9772b128675e14e51fd5b96d9518a8ba29fa35,PodSandbox
Id:eeeab7772fb0e74c5be38da53381a6b90d0d5c26e9c8b732d2e1c6eb63671c65,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761986864516805400,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-2pbx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9e973a4-20dd-4785-a3d6-1557c012cc76,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6
e0352b147e8a8fe43c9d94072f3f3fcc98914a55a5718cfd5fe168dcdb81f49,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1761986863046366251,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fbb154c5ba009280da1a426866a4cdde2195fb0006640dafb05c0da182a4866,PodSandboxId:058d4f2c90db7e8eae07ad5783426e56e467541eacbcb171f0f9227663407e68,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761986861153109309,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dmt9r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7e49bedc-b72d-400d-bc07-62040e55ac39,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e6c68a57ee535127b46ca112ce1439ee32d248af87fb4452856eb3e38c8eb2e,PodSandboxId:a5dfb28615faf962ed89b8003d79c80e87152c2a8d669af58898bd3254030389,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761986861018576547,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6ptqs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9fe7abf8-c7e2-47ee-ac99-699c34674a22,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d2226436f827529da95ea6b9148e9aad9e62a07499351f701e80b097311d036,PodSandboxId:c449271f0824b108061a1ee1fc23fbe6d16056014d0cfc3011aa2c20b94a8e24,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1761986829754350164,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-bzs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151e456a-63e0-4527-8511-34c4444fef48,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.
ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda41d22ea7ff808cb20920820ccf87f95d0c484f75f853dec58fc5d4aaa461b,PodSandboxId:e07af8e7a3ecad5569ae3da9545b988c374ac9f7b90e8533dd68c1dd6ecef92c,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761986826170977750,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-z8nnd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c555360c-9a9f-4f
dd-aa67-f18c3d2a4eb2,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b56bd6c195bd711f17cd7b927c9fbb20679383d08b6e954d3297e9850be5235,PodSandboxId:6d69749ca9bc78fa01c49c7d0757f3d0eafa3536279a622367a1a3b427e5d70c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1761986821805194743,Labels:map[string]string{io.kubernetes.container.name: local-pa
th-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-9ghvj,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d3c3231a-40d9-42f1-bc78-e2d1a104327a,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4c1be283a7f47690c854c85c4dcacc3e8b42f6727081c4a8a73e3e44c1d194,PodSandboxId:9f7ac0dd48cc1abeb4273f865cde830d51e77c8bd29a6c76ccecaf35745e99f7,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:176198675844940796
3,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d947f942-2149-492a-9b4e-1f9c22405815,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad7748982f904bf89ac86d1b7be83acfe37cfe9d240db5a3d2236808b8910a3,PodSandboxId:ca1dd787f338ac0254f2b930b7369f671d7ee68d7732bee6af1cf786d745c456,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c887
2c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761986733821709901,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0182754-0c9c-458b-a340-20ec025cb56c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bb5f4d4e768dfe5c0cf6bc80363bf72a32d74ddba50c19fc7e3e82b2268e1d3,PodSandboxId:fec37181f6706eb4994bc850d0e6623521190c923720024b4407780ba5c3168a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761986732059653348,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-vssmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b8c16e-b583-47df-a5c2-97218d3ec5be,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ff7b8e8784408623315cf07e8942d13f74e52cb65ad09e2d25796114020c1,PodSandboxId:d62d15d11c4955eb24e7866e8b7732b6d4471d399c0e33cef74d06eb40917eec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e
0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761986725130503569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2rqh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131b2b2-f9b9-4197-8bc7-4d1bc185c804,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0a2f86b38f42fab057b3fea7994c150
73ec1d05f3db97341f0fed0ad342cf9,PodSandboxId:e1fb2fcb1123b9a18ac17a1d8481c82478eed03828d094aab60d26b7c2f58bbd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761986724242985390,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fbmdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5dd6b4-2f38-4c9d-acd8-92f7984fd96a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80489befa62b8185c103a7d016a78a5924e4c5187536cb66142d1c5f8cc4a5b5,P
odSandboxId:d4cfa30f1a32a450d85f51370323574b5a0bcae75643efe39250a8b24cc1a1c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761986712208719638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0eeda84be59c6c1c023d04bf2f88758,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:844d913e662bc4587cf597763a1bad42bb8a4bf500ce948d822cfcb86a7e9fde,PodSandboxId:e2f739ab181cd43a508788c71e0d98b6ca0994d643a2896de2364e7f842ffa0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761986712197993742,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d081dd6df6b55662a095a017ad5712,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdeec4098b47d6e27b77f71ac1761aeb26a09c97d53566cde6a7c5ae79150c25,PodSandboxId:f1c88f09470e5834b2b0cfcdaddaf03ac25c10fd6f3492dc69b5941eb059bbae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761986712168522475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abcff5cb337834c6fd7a11d68a6b7be4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35bb45a49c1f528c9112deb8bfa037389ae6fae43afcbb2f86e4c3ed61156bf8,PodSandboxId:80615bf9878bb70db26be3ecace94169c4b7e503113541f10f7df27e95d8c035,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761986712170158026,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5912e2b5f9c4192157a57bf3d5021f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505
,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=abf55720-3d49-4285-ada7-be5a78d44638 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:58:13 addons-994396 crio[817]: time="2025-11-01 08:58:13.598107899Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=382f17d2-e003-4481-beae-0a3071e9d0f4 name=/runtime.v1.RuntimeService/Version
	Nov 01 08:58:13 addons-994396 crio[817]: time="2025-11-01 08:58:13.598183668Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=382f17d2-e003-4481-beae-0a3071e9d0f4 name=/runtime.v1.RuntimeService/Version
	Nov 01 08:58:13 addons-994396 crio[817]: time="2025-11-01 08:58:13.600485857Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b245d9fa-228f-48be-a4aa-8be40b366e47 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 08:58:13 addons-994396 crio[817]: time="2025-11-01 08:58:13.601576409Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761987493601549689,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:454585,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b245d9fa-228f-48be-a4aa-8be40b366e47 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 08:58:13 addons-994396 crio[817]: time="2025-11-01 08:58:13.602344374Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5745d9fa-6193-4bde-b5b5-5f31c671311f name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:58:13 addons-994396 crio[817]: time="2025-11-01 08:58:13.602407494Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5745d9fa-6193-4bde-b5b5-5f31c671311f name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:58:13 addons-994396 crio[817]: time="2025-11-01 08:58:13.603003442Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9aac7eb34690309e8dbd81343ee4a3afed4182f729bfb09119b2d0449fcb5163,PodSandboxId:cdbcecc3e9d43396748d11feb94389c468413b4e4db1f33c0ffbb67ba8cb8455,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761987117609973399,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f6cc746-15b0-4ddb-9f8b-fa3a7e7133ea,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c914a21ca5c30d325bf10151384a21f9bbcc7e25b2d34ca61bfaddd16505122,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1761987080383755595,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:437ef3bce50ac8a7ca0b9a31a96e010fea2dd24bba8a7a5f778f7bb5721a6a9d,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1761987048807726890,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f73cee1644b036ab76f839b96acf06de4009bbf807c978116290374a0b56065c,PodSandboxId:147663b03fe636d80386c5b9e498c5fb95c78d278121e7fb146f12c7e973609d,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761986999531197788,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-9cxnd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bf616938-c2ab-4f4c-92c8-9fa4ab2f6be9,},Annotations:map[string]
string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:862808e2ff30fdd764f8aaf3d5b1a5df067d9f837db07ff0372f86bd3b55cab5,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1761986992483188170,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4eac7bee2514139306d8419dc1c70f3cc677629e0546239a0322053b09eab44,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1761986961550289998,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89e19f39781eba8b57e656eb2450f2409f9b0faf0e3401335506a480d9066dc6,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1761986930173408810,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68bf99b640c16170eb3d1decd09fc1b538fbd6fde76792990703d14d18fd9728,PodSandboxId:c090988aa5e05ea1d7a0662eb99922460d3efcf1e9882123710f19fefe939704,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1761986868787532616,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf63ab79-b3fa-4917-a62b-a0758d1521b0,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39137378c3801cd49058632db343f950f188a84e2ff8cf681c71963efac4314f,PodSandboxId:6eaf5e212ad1c55657254e78247ce413b9c2d3e12e8e2cd69b6ccde788266623,Metadata:&ContainerMetadata{Name
:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1761986866382667222,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ee1d9b2-a99a-4003-9c65-77bd5e500b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80b7ac026d7558ab3c69afb722ff55dfe32d67be3e2bf197089b95da3dd31104,PodSandboxId:5ef1abbd77f24535b60585d2197c8a2259c59626ad0eb005b609003b505409e3,Metada
ta:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761986864620312300,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-jbkmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19dc2ae7-668b-4952-9c2d-6602eac4449e,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63011b6ec66fda56834e6c96c9772b128675e14e51fd5b96d9518a8ba29fa35,PodSandbox
Id:eeeab7772fb0e74c5be38da53381a6b90d0d5c26e9c8b732d2e1c6eb63671c65,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761986864516805400,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-2pbx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9e973a4-20dd-4785-a3d6-1557c012cc76,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6
e0352b147e8a8fe43c9d94072f3f3fcc98914a55a5718cfd5fe168dcdb81f49,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1761986863046366251,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fbb154c5ba009280da1a426866a4cdde2195fb0006640dafb05c0da182a4866,PodSandboxId:058d4f2c90db7e8eae07ad5783426e56e467541eacbcb171f0f9227663407e68,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761986861153109309,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dmt9r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7e49bedc-b72d-400d-bc07-62040e55ac39,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e6c68a57ee535127b46ca112ce1439ee32d248af87fb4452856eb3e38c8eb2e,PodSandboxId:a5dfb28615faf962ed89b8003d79c80e87152c2a8d669af58898bd3254030389,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761986861018576547,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6ptqs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9fe7abf8-c7e2-47ee-ac99-699c34674a22,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d2226436f827529da95ea6b9148e9aad9e62a07499351f701e80b097311d036,PodSandboxId:c449271f0824b108061a1ee1fc23fbe6d16056014d0cfc3011aa2c20b94a8e24,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1761986829754350164,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-bzs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151e456a-63e0-4527-8511-34c4444fef48,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.
ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda41d22ea7ff808cb20920820ccf87f95d0c484f75f853dec58fc5d4aaa461b,PodSandboxId:e07af8e7a3ecad5569ae3da9545b988c374ac9f7b90e8533dd68c1dd6ecef92c,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761986826170977750,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-z8nnd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c555360c-9a9f-4f
dd-aa67-f18c3d2a4eb2,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b56bd6c195bd711f17cd7b927c9fbb20679383d08b6e954d3297e9850be5235,PodSandboxId:6d69749ca9bc78fa01c49c7d0757f3d0eafa3536279a622367a1a3b427e5d70c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1761986821805194743,Labels:map[string]string{io.kubernetes.container.name: local-pa
th-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-9ghvj,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d3c3231a-40d9-42f1-bc78-e2d1a104327a,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4c1be283a7f47690c854c85c4dcacc3e8b42f6727081c4a8a73e3e44c1d194,PodSandboxId:9f7ac0dd48cc1abeb4273f865cde830d51e77c8bd29a6c76ccecaf35745e99f7,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:176198675844940796
3,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d947f942-2149-492a-9b4e-1f9c22405815,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad7748982f904bf89ac86d1b7be83acfe37cfe9d240db5a3d2236808b8910a3,PodSandboxId:ca1dd787f338ac0254f2b930b7369f671d7ee68d7732bee6af1cf786d745c456,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c887
2c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761986733821709901,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0182754-0c9c-458b-a340-20ec025cb56c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bb5f4d4e768dfe5c0cf6bc80363bf72a32d74ddba50c19fc7e3e82b2268e1d3,PodSandboxId:fec37181f6706eb4994bc850d0e6623521190c923720024b4407780ba5c3168a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761986732059653348,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-vssmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b8c16e-b583-47df-a5c2-97218d3ec5be,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ff7b8e8784408623315cf07e8942d13f74e52cb65ad09e2d25796114020c1,PodSandboxId:d62d15d11c4955eb24e7866e8b7732b6d4471d399c0e33cef74d06eb40917eec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e
0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761986725130503569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2rqh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131b2b2-f9b9-4197-8bc7-4d1bc185c804,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0a2f86b38f42fab057b3fea7994c150
73ec1d05f3db97341f0fed0ad342cf9,PodSandboxId:e1fb2fcb1123b9a18ac17a1d8481c82478eed03828d094aab60d26b7c2f58bbd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761986724242985390,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fbmdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5dd6b4-2f38-4c9d-acd8-92f7984fd96a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80489befa62b8185c103a7d016a78a5924e4c5187536cb66142d1c5f8cc4a5b5,P
odSandboxId:d4cfa30f1a32a450d85f51370323574b5a0bcae75643efe39250a8b24cc1a1c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761986712208719638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0eeda84be59c6c1c023d04bf2f88758,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:844d913e662bc4587cf597763a1bad42bb8a4bf500ce948d822cfcb86a7e9fde,PodSandboxId:e2f739ab181cd43a508788c71e0d98b6ca0994d643a2896de2364e7f842ffa0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761986712197993742,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d081dd6df6b55662a095a017ad5712,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdeec4098b47d6e27b77f71ac1761aeb26a09c97d53566cde6a7c5ae79150c25,PodSandboxId:f1c88f09470e5834b2b0cfcdaddaf03ac25c10fd6f3492dc69b5941eb059bbae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761986712168522475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abcff5cb337834c6fd7a11d68a6b7be4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35bb45a49c1f528c9112deb8bfa037389ae6fae43afcbb2f86e4c3ed61156bf8,PodSandboxId:80615bf9878bb70db26be3ecace94169c4b7e503113541f10f7df27e95d8c035,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761986712170158026,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5912e2b5f9c4192157a57bf3d5021f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505
,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5745d9fa-6193-4bde-b5b5-5f31c671311f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	9aac7eb346903       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          6 minutes ago       Running             busybox                                  0                   cdbcecc3e9d43       busybox
	8c914a21ca5c3       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          6 minutes ago       Running             csi-snapshotter                          0                   89c5974bdcafd       csi-hostpathplugin-7l7ps
	437ef3bce50ac       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          7 minutes ago       Running             csi-provisioner                          0                   89c5974bdcafd       csi-hostpathplugin-7l7ps
	f73cee1644b03       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd                             8 minutes ago       Running             controller                               0                   147663b03fe63       ingress-nginx-controller-675c5ddd98-9cxnd
	862808e2ff30f       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            8 minutes ago       Running             liveness-probe                           0                   89c5974bdcafd       csi-hostpathplugin-7l7ps
	a4eac7bee2514       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           8 minutes ago       Running             hostpath                                 0                   89c5974bdcafd       csi-hostpathplugin-7l7ps
	89e19f39781eb       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                9 minutes ago       Running             node-driver-registrar                    0                   89c5974bdcafd       csi-hostpathplugin-7l7ps
	68bf99b640c16       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              10 minutes ago      Running             csi-resizer                              0                   c090988aa5e05       csi-hostpath-resizer-0
	39137378c3801       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             10 minutes ago      Running             csi-attacher                             0                   6eaf5e212ad1c       csi-hostpath-attacher-0
	80b7ac026d755       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      10 minutes ago      Running             volume-snapshot-controller               0                   5ef1abbd77f24       snapshot-controller-7d9fbc56b8-jbkmr
	a63011b6ec66f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      10 minutes ago      Running             volume-snapshot-controller               0                   eeeab7772fb0e       snapshot-controller-7d9fbc56b8-2pbx5
	6e0352b147e8a       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   10 minutes ago      Running             csi-external-health-monitor-controller   0                   89c5974bdcafd       csi-hostpathplugin-7l7ps
	7fbb154c5ba00       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39                   10 minutes ago      Exited              patch                                    0                   058d4f2c90db7       ingress-nginx-admission-patch-dmt9r
	5e6c68a57ee53       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39                   10 minutes ago      Exited              create                                   0                   a5dfb28615faf       ingress-nginx-admission-create-6ptqs
	6d2226436f827       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              11 minutes ago      Running             registry-proxy                           0                   c449271f0824b       registry-proxy-bzs78
	dda41d22ea7ff       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            11 minutes ago      Running             gadget                                   0                   e07af8e7a3eca       gadget-z8nnd
	9b56bd6c195bd       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             11 minutes ago      Running             local-path-provisioner                   0                   6d69749ca9bc7       local-path-provisioner-648f6765c9-9ghvj
	7b4c1be283a7f       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               12 minutes ago      Running             minikube-ingress-dns                     0                   9f7ac0dd48cc1       kube-ingress-dns-minikube
	2ad7748982f90       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             12 minutes ago      Running             storage-provisioner                      0                   ca1dd787f338a       storage-provisioner
	9bb5f4d4e768d       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     12 minutes ago      Running             amd-gpu-device-plugin                    0                   fec37181f6706       amd-gpu-device-plugin-vssmp
	9d0ff7b8e8784       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             12 minutes ago      Running             coredns                                  0                   d62d15d11c495       coredns-66bc5c9577-2rqh8
	9d0a2f86b38f4       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             12 minutes ago      Running             kube-proxy                               0                   e1fb2fcb1123b       kube-proxy-fbmdq
	80489befa62b8       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             13 minutes ago      Running             kube-scheduler                           0                   d4cfa30f1a32a       kube-scheduler-addons-994396
	844d913e662bc       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             13 minutes ago      Running             etcd                                     0                   e2f739ab181cd       etcd-addons-994396
	35bb45a49c1f5       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             13 minutes ago      Running             kube-controller-manager                  0                   80615bf9878bb       kube-controller-manager-addons-994396
	fdeec4098b47d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             13 minutes ago      Running             kube-apiserver                           0                   f1c88f09470e5       kube-apiserver-addons-994396
	
	
	==> coredns [9d0ff7b8e8784408623315cf07e8942d13f74e52cb65ad09e2d25796114020c1] <==
	[INFO] 10.244.0.8:50179 - 63668 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00005676s
	[INFO] 10.244.0.8:51425 - 18045 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000164513s
	[INFO] 10.244.0.8:51425 - 62727 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000144892s
	[INFO] 10.244.0.8:51425 - 25601 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000152426s
	[INFO] 10.244.0.8:51425 - 21453 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000198347s
	[INFO] 10.244.0.8:51425 - 49495 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00010407s
	[INFO] 10.244.0.8:51425 - 6587 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000119177s
	[INFO] 10.244.0.8:51425 - 35947 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000150925s
	[INFO] 10.244.0.8:51425 - 38573 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000332618s
	[INFO] 10.244.0.8:60420 - 44002 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000460243s
	[INFO] 10.244.0.8:60420 - 17090 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000249662s
	[INFO] 10.244.0.8:60420 - 8486 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000080701s
	[INFO] 10.244.0.8:60420 - 14925 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000360991s
	[INFO] 10.244.0.8:60420 - 55869 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000068771s
	[INFO] 10.244.0.8:60420 - 20941 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000061503s
	[INFO] 10.244.0.8:60420 - 3261 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000183969s
	[INFO] 10.244.0.8:60420 - 14997 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00014118s
	[INFO] 10.244.0.8:35149 - 32472 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000234249s
	[INFO] 10.244.0.8:35149 - 11226 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000253372s
	[INFO] 10.244.0.8:35149 - 20761 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000111258s
	[INFO] 10.244.0.8:35149 - 12086 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000208249s
	[INFO] 10.244.0.8:35149 - 24726 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000116798s
	[INFO] 10.244.0.8:35149 - 25398 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000106778s
	[INFO] 10.244.0.8:35149 - 60141 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000111922s
	[INFO] 10.244.0.8:35149 - 10123 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000153057s
	
	
	==> describe nodes <==
	Name:               addons-994396
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-994396
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=addons-994396
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T08_45_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-994396
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-994396"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 08:45:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-994396
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 08:58:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 08:56:22 +0000   Sat, 01 Nov 2025 08:45:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 08:56:22 +0000   Sat, 01 Nov 2025 08:45:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 08:56:22 +0000   Sat, 01 Nov 2025 08:45:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 08:56:22 +0000   Sat, 01 Nov 2025 08:45:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    addons-994396
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 47158355a9594cbf84ea23a10000597a
	  System UUID:                47158355-a959-4cbf-84ea-23a10000597a
	  Boot ID:                    8b22796c-545f-4b51-954a-eb39441cd160
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (23 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m38s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	  gadget                      gadget-z8nnd                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-9cxnd                     100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         12m
	  kube-system                 amd-gpu-device-plugin-vssmp                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-66bc5c9577-2rqh8                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpathplugin-7l7ps                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-addons-994396                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-994396                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-994396                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-fbmdq                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-994396                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 registry-6b586f9694-b4ph6                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 registry-proxy-bzs78                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-7d9fbc56b8-2pbx5                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-7d9fbc56b8-jbkmr                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          helper-pod-create-pvc-2db794c4-2444-4d03-b933-772cf722902e    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  local-path-storage          local-path-provisioner-648f6765c9-9ghvj                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node addons-994396 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node addons-994396 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node addons-994396 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node addons-994396 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node addons-994396 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node addons-994396 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m                kubelet          Node addons-994396 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node addons-994396 event: Registered Node addons-994396 in Controller
	
	
	==> dmesg <==
	[Nov 1 08:46] kauditd_printk_skb: 5 callbacks suppressed
	[Nov 1 08:47] kauditd_printk_skb: 32 callbacks suppressed
	[ +34.333332] kauditd_printk_skb: 101 callbacks suppressed
	[  +3.822306] kauditd_printk_skb: 111 callbacks suppressed
	[  +1.002792] kauditd_printk_skb: 88 callbacks suppressed
	[Nov 1 08:49] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.000036] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.000133] kauditd_printk_skb: 29 callbacks suppressed
	[ +11.240953] kauditd_printk_skb: 41 callbacks suppressed
	[Nov 1 08:50] kauditd_printk_skb: 17 callbacks suppressed
	[ +34.452421] kauditd_printk_skb: 2 callbacks suppressed
	[Nov 1 08:51] kauditd_printk_skb: 26 callbacks suppressed
	[  +0.000047] kauditd_printk_skb: 5 callbacks suppressed
	[ +21.931610] kauditd_printk_skb: 26 callbacks suppressed
	[Nov 1 08:52] kauditd_printk_skb: 5 callbacks suppressed
	[  +6.008516] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.922747] kauditd_printk_skb: 38 callbacks suppressed
	[  +6.151130] kauditd_printk_skb: 37 callbacks suppressed
	[ +11.857033] kauditd_printk_skb: 84 callbacks suppressed
	[  +0.000069] kauditd_printk_skb: 22 callbacks suppressed
	[Nov 1 08:54] kauditd_printk_skb: 26 callbacks suppressed
	[ +40.501255] kauditd_printk_skb: 2 callbacks suppressed
	[Nov 1 08:55] kauditd_printk_skb: 9 callbacks suppressed
	[Nov 1 08:56] kauditd_printk_skb: 45 callbacks suppressed
	[Nov 1 08:57] kauditd_printk_skb: 38 callbacks suppressed
	
	
	==> etcd [844d913e662bc4587cf597763a1bad42bb8a4bf500ce948d822cfcb86a7e9fde] <==
	{"level":"info","ts":"2025-11-01T08:47:54.978149Z","caller":"traceutil/trace.go:172","msg":"trace[879398792] linearizableReadLoop","detail":"{readStateIndex:1248; appliedIndex:1248; }","duration":"128.792993ms","start":"2025-11-01T08:47:54.849340Z","end":"2025-11-01T08:47:54.978133Z","steps":["trace[879398792] 'read index received'  (duration: 128.787273ms)","trace[879398792] 'applied index is now lower than readState.Index'  (duration: 4.859µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T08:47:54.978274Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.918573ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:47:54.978294Z","caller":"traceutil/trace.go:172","msg":"trace[478888116] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1194; }","duration":"128.951874ms","start":"2025-11-01T08:47:54.849337Z","end":"2025-11-01T08:47:54.978289Z","steps":["trace[478888116] 'agreement among raft nodes before linearized reading'  (duration: 128.896473ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:47:54.978301Z","caller":"traceutil/trace.go:172","msg":"trace[127276739] transaction","detail":"{read_only:false; response_revision:1195; number_of_response:1; }","duration":"193.938157ms","start":"2025-11-01T08:47:54.784350Z","end":"2025-11-01T08:47:54.978289Z","steps":["trace[127276739] 'process raft request'  (duration: 193.811655ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:50:03.807211Z","caller":"traceutil/trace.go:172","msg":"trace[306428088] transaction","detail":"{read_only:false; response_revision:1410; number_of_response:1; }","duration":"143.076836ms","start":"2025-11-01T08:50:03.664107Z","end":"2025-11-01T08:50:03.807184Z","steps":["trace[306428088] 'process raft request'  (duration: 142.860459ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:50:30.399983Z","caller":"traceutil/trace.go:172","msg":"trace[417490432] transaction","detail":"{read_only:false; response_revision:1462; number_of_response:1; }","duration":"105.005558ms","start":"2025-11-01T08:50:30.294965Z","end":"2025-11-01T08:50:30.399970Z","steps":["trace[417490432] 'process raft request'  (duration: 104.840267ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:51:25.785305Z","caller":"traceutil/trace.go:172","msg":"trace[446064097] linearizableReadLoop","detail":"{readStateIndex:1675; appliedIndex:1675; }","duration":"202.139299ms","start":"2025-11-01T08:51:25.583130Z","end":"2025-11-01T08:51:25.785270Z","steps":["trace[446064097] 'read index received'  (duration: 202.133895ms)","trace[446064097] 'applied index is now lower than readState.Index'  (duration: 4.594µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T08:51:25.785474Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"202.320618ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:51:25.785498Z","caller":"traceutil/trace.go:172","msg":"trace[2127751376] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1576; }","duration":"202.392505ms","start":"2025-11-01T08:51:25.583101Z","end":"2025-11-01T08:51:25.785493Z","steps":["trace[2127751376] 'agreement among raft nodes before linearized reading'  (duration: 202.298341ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:51:25.785518Z","caller":"traceutil/trace.go:172","msg":"trace[25251410] transaction","detail":"{read_only:false; response_revision:1577; number_of_response:1; }","duration":"230.552599ms","start":"2025-11-01T08:51:25.554955Z","end":"2025-11-01T08:51:25.785507Z","steps":["trace[25251410] 'process raft request'  (duration: 230.448007ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:52:18.027453Z","caller":"traceutil/trace.go:172","msg":"trace[1612683542] linearizableReadLoop","detail":"{readStateIndex:1872; appliedIndex:1872; }","duration":"169.871386ms","start":"2025-11-01T08:52:17.857553Z","end":"2025-11-01T08:52:18.027424Z","steps":["trace[1612683542] 'read index received'  (duration: 169.865757ms)","trace[1612683542] 'applied index is now lower than readState.Index'  (duration: 4.911µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T08:52:18.027601Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"170.004057ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:52:18.027618Z","caller":"traceutil/trace.go:172","msg":"trace[354966435] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1760; }","duration":"170.064613ms","start":"2025-11-01T08:52:17.857549Z","end":"2025-11-01T08:52:18.027613Z","steps":["trace[354966435] 'agreement among raft nodes before linearized reading'  (duration: 169.976661ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:52:18.027617Z","caller":"traceutil/trace.go:172","msg":"trace[182557049] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1761; }","duration":"175.595316ms","start":"2025-11-01T08:52:17.852012Z","end":"2025-11-01T08:52:18.027607Z","steps":["trace[182557049] 'process raft request'  (duration: 175.503416ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:52:23.484737Z","caller":"traceutil/trace.go:172","msg":"trace[1326759402] linearizableReadLoop","detail":"{readStateIndex:1904; appliedIndex:1904; }","duration":"340.503004ms","start":"2025-11-01T08:52:23.144214Z","end":"2025-11-01T08:52:23.484717Z","steps":["trace[1326759402] 'read index received'  (duration: 340.496208ms)","trace[1326759402] 'applied index is now lower than readState.Index'  (duration: 5.868µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T08:52:23.485008Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"340.771395ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1114"}
	{"level":"info","ts":"2025-11-01T08:52:23.485058Z","caller":"traceutil/trace.go:172","msg":"trace[1039449345] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1790; }","duration":"340.841883ms","start":"2025-11-01T08:52:23.144209Z","end":"2025-11-01T08:52:23.485051Z","steps":["trace[1039449345] 'agreement among raft nodes before linearized reading'  (duration: 340.62868ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T08:52:23.485106Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T08:52:23.144193Z","time spent":"340.902265ms","remote":"127.0.0.1:36552","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":1,"response size":1137,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 "}
	{"level":"warn","ts":"2025-11-01T08:52:23.485553Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"287.574901ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:52:23.485588Z","caller":"traceutil/trace.go:172","msg":"trace[1585287071] range","detail":"{range_begin:/registry/namespaces; range_end:; response_count:0; response_revision:1791; }","duration":"287.617514ms","start":"2025-11-01T08:52:23.197963Z","end":"2025-11-01T08:52:23.485581Z","steps":["trace[1585287071] 'agreement among raft nodes before linearized reading'  (duration: 287.549253ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:52:23.485660Z","caller":"traceutil/trace.go:172","msg":"trace[1103263823] transaction","detail":"{read_only:false; response_revision:1791; number_of_response:1; }","duration":"361.459988ms","start":"2025-11-01T08:52:23.124191Z","end":"2025-11-01T08:52:23.485651Z","steps":["trace[1103263823] 'process raft request'  (duration: 361.180443ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T08:52:23.485795Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T08:52:23.124175Z","time spent":"361.507625ms","remote":"127.0.0.1:36760","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1766 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2025-11-01T08:55:13.580313Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1434}
	{"level":"info","ts":"2025-11-01T08:55:13.648379Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1434,"took":"67.304726ms","hash":2547452093,"current-db-size-bytes":5730304,"current-db-size":"5.7 MB","current-db-size-in-use-bytes":3653632,"current-db-size-in-use":"3.7 MB"}
	{"level":"info","ts":"2025-11-01T08:55:13.648498Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2547452093,"revision":1434,"compact-revision":-1}
	
	
	==> kernel <==
	 08:58:13 up 13 min,  0 users,  load average: 0.25, 0.43, 0.39
	Linux addons-994396 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [fdeec4098b47d6e27b77f71ac1761aeb26a09c97d53566cde6a7c5ae79150c25] <==
	W1101 08:46:31.751759       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 08:46:31.751828       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1101 08:46:31.751848       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1101 08:46:31.752853       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 08:46:31.752966       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1101 08:46:31.753020       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1101 08:48:03.292013       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.19.139:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.19.139:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.19.139:443: connect: connection refused" logger="UnhandledError"
	W1101 08:48:03.296407       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 08:48:03.296747       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1101 08:48:03.297742       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.19.139:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.19.139:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.19.139:443: connect: connection refused" logger="UnhandledError"
	E1101 08:48:03.298496       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.19.139:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.19.139:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.19.139:443: connect: connection refused" logger="UnhandledError"
	I1101 08:48:03.353240       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1101 08:52:03.525330       1 conn.go:339] Error on socket receive: read tcp 192.168.39.195:8443->192.168.39.1:42910: use of closed network connection
	E1101 08:52:03.723785       1 conn.go:339] Error on socket receive: read tcp 192.168.39.195:8443->192.168.39.1:42940: use of closed network connection
	I1101 08:52:12.984624       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.226.149"}
	I1101 08:53:04.341444       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1101 08:55:15.302985       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 08:56:08.891135       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1101 08:56:09.140799       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.237.168"}
	
	
	==> kube-controller-manager [35bb45a49c1f528c9112deb8bfa037389ae6fae43afcbb2f86e4c3ed61156bf8] <==
	E1101 08:46:22.433268       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 08:46:22.496038       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1101 08:46:52.438789       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 08:46:52.504482       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1101 08:47:22.446493       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 08:47:22.515370       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1101 08:47:52.452536       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 08:47:52.535721       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1101 08:52:17.008825       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I1101 08:52:35.860282       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	E1101 08:54:57.714310       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:54:57.738576       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:54:57.766801       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:54:57.805443       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:54:57.865423       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:54:57.962606       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:54:58.138236       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:54:58.477214       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:54:59.131849       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:55:00.430311       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:55:03.008821       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:55:08.147281       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:55:18.405556       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:55:27.269224       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	I1101 08:56:13.507559       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	
	
	==> kube-proxy [9d0a2f86b38f42fab057b3fea7994c15073ec1d05f3db97341f0fed0ad342cf9] <==
	I1101 08:45:24.962819       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 08:45:25.066839       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 08:45:25.068064       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.195"]
	E1101 08:45:25.073313       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 08:45:25.410848       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 08:45:25.410962       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 08:45:25.410991       1 server_linux.go:132] "Using iptables Proxier"
	I1101 08:45:25.477946       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 08:45:25.478244       1 server.go:527] "Version info" version="v1.34.1"
	I1101 08:45:25.478277       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 08:45:25.484125       1 config.go:106] "Starting endpoint slice config controller"
	I1101 08:45:25.484405       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 08:45:25.491275       1 config.go:200] "Starting service config controller"
	I1101 08:45:25.491309       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 08:45:25.494813       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 08:45:25.496161       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 08:45:25.495379       1 config.go:309] "Starting node config controller"
	I1101 08:45:25.506423       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 08:45:25.506433       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 08:45:25.584706       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 08:45:25.592170       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 08:45:25.598016       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [80489befa62b8185c103a7d016a78a5924e4c5187536cb66142d1c5f8cc4a5b5] <==
	E1101 08:45:15.349464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 08:45:15.349542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 08:45:15.349728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 08:45:15.349881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 08:45:15.352076       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 08:45:15.352119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 08:45:15.352139       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 08:45:15.352358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 08:45:15.352409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 08:45:15.357367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 08:45:15.357513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 08:45:15.357652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 08:45:16.203110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 08:45:16.263373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 08:45:16.299073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 08:45:16.424658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 08:45:16.486112       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 08:45:16.556670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 08:45:16.568573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 08:45:16.598275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 08:45:16.651957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 08:45:16.662617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 08:45:16.674245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 08:45:16.759792       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1101 08:45:19.143863       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 08:57:26 addons-994396 kubelet[1497]: E1101 08:57:26.819390    1497 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Nov 01 08:57:26 addons-994396 kubelet[1497]: E1101 08:57:26.819597    1497 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(9c49ac5d-18e5-470b-9217-c0a58f0636a1): ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 01 08:57:26 addons-994396 kubelet[1497]: E1101 08:57:26.819665    1497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="9c49ac5d-18e5-470b-9217-c0a58f0636a1"
	Nov 01 08:57:27 addons-994396 kubelet[1497]: E1101 08:57:27.194935    1497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="9c49ac5d-18e5-470b-9217-c0a58f0636a1"
	Nov 01 08:57:28 addons-994396 kubelet[1497]: E1101 08:57:28.400487    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761987448400105057  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:57:28 addons-994396 kubelet[1497]: E1101 08:57:28.400563    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761987448400105057  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:57:33 addons-994396 kubelet[1497]: E1101 08:57:33.874519    1497 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/registry-creds-764b6fb674-xstzf" podUID="75cdadc5-e3ea-4aae-9002-6dca21e0f758"
	Nov 01 08:57:34 addons-994396 kubelet[1497]: I1101 08:57:34.298809    1497 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzpmc\" (UniqueName: \"kubernetes.io/projected/75cdadc5-e3ea-4aae-9002-6dca21e0f758-kube-api-access-nzpmc\") pod \"75cdadc5-e3ea-4aae-9002-6dca21e0f758\" (UID: \"75cdadc5-e3ea-4aae-9002-6dca21e0f758\") "
	Nov 01 08:57:34 addons-994396 kubelet[1497]: I1101 08:57:34.304103    1497 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75cdadc5-e3ea-4aae-9002-6dca21e0f758-kube-api-access-nzpmc" (OuterVolumeSpecName: "kube-api-access-nzpmc") pod "75cdadc5-e3ea-4aae-9002-6dca21e0f758" (UID: "75cdadc5-e3ea-4aae-9002-6dca21e0f758"). InnerVolumeSpecName "kube-api-access-nzpmc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 01 08:57:34 addons-994396 kubelet[1497]: I1101 08:57:34.399791    1497 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nzpmc\" (UniqueName: \"kubernetes.io/projected/75cdadc5-e3ea-4aae-9002-6dca21e0f758-kube-api-access-nzpmc\") on node \"addons-994396\" DevicePath \"\""
	Nov 01 08:57:35 addons-994396 kubelet[1497]: I1101 08:57:35.407281    1497 reconciler_common.go:299] "Volume detached for volume \"gcr-creds\" (UniqueName: \"kubernetes.io/secret/75cdadc5-e3ea-4aae-9002-6dca21e0f758-gcr-creds\") on node \"addons-994396\" DevicePath \"\""
	Nov 01 08:57:35 addons-994396 kubelet[1497]: I1101 08:57:35.974309    1497 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75cdadc5-e3ea-4aae-9002-6dca21e0f758" path="/var/lib/kubelet/pods/75cdadc5-e3ea-4aae-9002-6dca21e0f758/volumes"
	Nov 01 08:57:38 addons-994396 kubelet[1497]: E1101 08:57:38.403834    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761987458403105409  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:57:38 addons-994396 kubelet[1497]: E1101 08:57:38.403880    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761987458403105409  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:57:48 addons-994396 kubelet[1497]: E1101 08:57:48.407519    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761987468407054671  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:57:48 addons-994396 kubelet[1497]: E1101 08:57:48.407547    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761987468407054671  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:57:53 addons-994396 kubelet[1497]: I1101 08:57:53.969689    1497 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-bzs78" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 08:57:58 addons-994396 kubelet[1497]: E1101 08:57:58.410365    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761987478409873164  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:57:58 addons-994396 kubelet[1497]: E1101 08:57:58.410396    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761987478409873164  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:58:08 addons-994396 kubelet[1497]: E1101 08:58:08.413554    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761987488413141105  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:58:08 addons-994396 kubelet[1497]: E1101 08:58:08.413603    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761987488413141105  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:58:11 addons-994396 kubelet[1497]: E1101 08:58:11.006157    1497 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from image index: reading manifest sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 01 08:58:11 addons-994396 kubelet[1497]: E1101 08:58:11.006251    1497 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from image index: reading manifest sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 01 08:58:11 addons-994396 kubelet[1497]: E1101 08:58:11.006543    1497 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(8623da74-791e-4fd6-a974-60ebca5738a7): ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 01 08:58:11 addons-994396 kubelet[1497]: E1101 08:58:11.006593    1497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"fetching target platform image selected from image index: reading manifest sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="8623da74-791e-4fd6-a974-60ebca5738a7"
	
	
	==> storage-provisioner [2ad7748982f904bf89ac86d1b7be83acfe37cfe9d240db5a3d2236808b8910a3] <==
	W1101 08:57:49.344744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:57:51.348095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:57:51.355214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:57:53.359218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:57:53.364681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:57:55.369492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:57:55.374337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:57:57.378845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:57:57.396943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:57:59.402101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:57:59.411156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:01.420656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:01.426301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:03.431316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:03.440301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:05.450117       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:05.460140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:07.467197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:07.473843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:09.484729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:09.492356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:11.498564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:11.503820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:13.509469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:13.517545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-994396 -n addons-994396
helpers_test.go:269: (dbg) Run:  kubectl --context addons-994396 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-6ptqs ingress-nginx-admission-patch-dmt9r registry-6b586f9694-b4ph6 helper-pod-create-pvc-2db794c4-2444-4d03-b933-772cf722902e
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-994396 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-6ptqs ingress-nginx-admission-patch-dmt9r registry-6b586f9694-b4ph6 helper-pod-create-pvc-2db794c4-2444-4d03-b933-772cf722902e
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-994396 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-6ptqs ingress-nginx-admission-patch-dmt9r registry-6b586f9694-b4ph6 helper-pod-create-pvc-2db794c4-2444-4d03-b933-772cf722902e: exit status 1 (98.458458ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-994396/192.168.39.195
	Start Time:       Sat, 01 Nov 2025 08:56:09 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rlw58 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rlw58:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  2m6s                default-scheduler  Successfully assigned default/nginx to addons-994396
	  Warning  Failed     49s                 kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     49s                 kubelet            Error: ErrImagePull
	  Normal   BackOff    48s                 kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     48s                 kubelet            Error: ImagePullBackOff
	  Normal   Pulling    36s (x2 over 2m6s)  kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-994396/192.168.39.195
	Start Time:       Sat, 01 Nov 2025 08:52:44 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mngk2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-mngk2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  5m31s                default-scheduler  Successfully assigned default/task-pv-pod to addons-994396
	  Warning  Failed     4m49s                kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    97s (x2 over 4m49s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     97s (x2 over 4m49s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    83s (x3 over 5m31s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     4s (x3 over 4m49s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4s (x2 over 109s)    kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-65r97 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-65r97:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-6ptqs" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-dmt9r" not found
	Error from server (NotFound): pods "registry-6b586f9694-b4ph6" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-2db794c4-2444-4d03-b933-772cf722902e" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-994396 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-6ptqs ingress-nginx-admission-patch-dmt9r registry-6b586f9694-b4ph6 helper-pod-create-pvc-2db794c4-2444-4d03-b933-772cf722902e: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-994396 addons disable registry --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/Registry (363.25s)

                                                
                                    
TestAddons/parallel/Ingress (492.86s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-994396 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-994396 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-994396 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [9c49ac5d-18e5-470b-9217-c0a58f0636a1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestAddons/parallel/Ingress: WARNING: pod list for "default" "run=nginx" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:252: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:252: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-994396 -n addons-994396
addons_test.go:252: TestAddons/parallel/Ingress: showing logs for failed pods as of 2025-11-01 09:04:09.414031392 +0000 UTC m=+1184.440708628
addons_test.go:252: (dbg) Run:  kubectl --context addons-994396 describe po nginx -n default
addons_test.go:252: (dbg) kubectl --context addons-994396 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-994396/192.168.39.195
Start Time:       Sat, 01 Nov 2025 08:56:09 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.29
IPs:
IP:  10.244.0.29
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rlw58 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-rlw58:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  8m                    default-scheduler  Successfully assigned default/nginx to addons-994396
Warning  Failed     3m58s                 kubelet            Failed to pull image "docker.io/nginx:alpine": fetching target platform image selected from image index: reading manifest sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     101s (x3 over 6m43s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     101s (x4 over 6m43s)  kubelet            Error: ErrImagePull
Normal   BackOff    26s (x10 over 6m42s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     26s (x10 over 6m42s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    14s (x5 over 8m)      kubelet            Pulling image "docker.io/nginx:alpine"
addons_test.go:252: (dbg) Run:  kubectl --context addons-994396 logs nginx -n default
addons_test.go:252: (dbg) Non-zero exit: kubectl --context addons-994396 logs nginx -n default: exit status 1 (72.047898ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:252: kubectl --context addons-994396 logs nginx -n default: exit status 1
addons_test.go:253: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-994396 -n addons-994396
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-994396 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-994396 logs -n 25: (1.46288361s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-664461 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-664461 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:44 UTC │
	│ delete  │ -p download-only-664461                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-664461 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:44 UTC │
	│ delete  │ -p download-only-147882                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-147882 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:44 UTC │
	│ delete  │ -p download-only-664461                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-664461 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:44 UTC │
	│ start   │ --download-only -p binary-mirror-775538 --alsologtostderr --binary-mirror http://127.0.0.1:36997 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-775538 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │                     │
	│ delete  │ -p binary-mirror-775538                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-775538 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:44 UTC │
	│ addons  │ enable dashboard -p addons-994396                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │                     │
	│ addons  │ disable dashboard -p addons-994396                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │                     │
	│ start   │ -p addons-994396 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:51 UTC │
	│ addons  │ addons-994396 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:51 UTC │ 01 Nov 25 08:51 UTC │
	│ addons  │ addons-994396 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:52 UTC │ 01 Nov 25 08:52 UTC │
	│ addons  │ enable headlamp -p addons-994396 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:52 UTC │ 01 Nov 25 08:52 UTC │
	│ addons  │ addons-994396 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:52 UTC │ 01 Nov 25 08:52 UTC │
	│ addons  │ addons-994396 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:52 UTC │ 01 Nov 25 08:52 UTC │
	│ addons  │ addons-994396 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:52 UTC │ 01 Nov 25 08:52 UTC │
	│ addons  │ addons-994396 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:52 UTC │ 01 Nov 25 08:52 UTC │
	│ addons  │ addons-994396 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:54 UTC │ 01 Nov 25 08:56 UTC │
	│ addons  │ addons-994396 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:57 UTC │ 01 Nov 25 08:57 UTC │
	│ addons  │ addons-994396 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:57 UTC │ 01 Nov 25 08:57 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-994396                                                                                                                                                                                                                                                                                                                                                                                         │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:57 UTC │ 01 Nov 25 08:57 UTC │
	│ addons  │ addons-994396 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:57 UTC │ 01 Nov 25 08:57 UTC │
	│ addons  │ addons-994396 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:58 UTC │ 01 Nov 25 08:58 UTC │
	│ addons  │ addons-994396 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:58 UTC │ 01 Nov 25 08:58 UTC │
	│ addons  │ addons-994396 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:58 UTC │ 01 Nov 25 08:58 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 08:44:38
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 08:44:38.415244  535088 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:44:38.415511  535088 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:44:38.415520  535088 out.go:374] Setting ErrFile to fd 2...
	I1101 08:44:38.415525  535088 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:44:38.415722  535088 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-530629/.minikube/bin
	I1101 08:44:38.416292  535088 out.go:368] Setting JSON to false
	I1101 08:44:38.417206  535088 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":62800,"bootTime":1761923878,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 08:44:38.417275  535088 start.go:143] virtualization: kvm guest
	I1101 08:44:38.419180  535088 out.go:179] * [addons-994396] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 08:44:38.420576  535088 notify.go:221] Checking for updates...
	I1101 08:44:38.420602  535088 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 08:44:38.422388  535088 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 08:44:38.423762  535088 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-530629/kubeconfig
	I1101 08:44:38.425054  535088 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-530629/.minikube
	I1101 08:44:38.426433  535088 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 08:44:38.427613  535088 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 08:44:38.429086  535088 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 08:44:38.459669  535088 out.go:179] * Using the kvm2 driver based on user configuration
	I1101 08:44:38.460716  535088 start.go:309] selected driver: kvm2
	I1101 08:44:38.460736  535088 start.go:930] validating driver "kvm2" against <nil>
	I1101 08:44:38.460750  535088 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 08:44:38.461509  535088 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 08:44:38.461750  535088 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 08:44:38.461788  535088 cni.go:84] Creating CNI manager for ""
	I1101 08:44:38.461839  535088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 08:44:38.461847  535088 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1101 08:44:38.461887  535088 start.go:353] cluster config:
	{Name:addons-994396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-994396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1101 08:44:38.462012  535088 iso.go:125] acquiring lock: {Name:mk4a0ae0d13e232f8e381ad8e5059e42b27a0733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 08:44:38.463350  535088 out.go:179] * Starting "addons-994396" primary control-plane node in "addons-994396" cluster
	I1101 08:44:38.464523  535088 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 08:44:38.464559  535088 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-530629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 08:44:38.464570  535088 cache.go:59] Caching tarball of preloaded images
	I1101 08:44:38.464648  535088 preload.go:233] Found /home/jenkins/minikube-integration/21833-530629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 08:44:38.464659  535088 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 08:44:38.464982  535088 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/config.json ...
	I1101 08:44:38.465015  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/config.json: {Name:mk89a75531523cc17e10cf65ac144e466baef6b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:44:38.465175  535088 start.go:360] acquireMachinesLock for addons-994396: {Name:mk0f0dee5270210132f861d1e08706cfde31b35b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 08:44:38.465227  535088 start.go:364] duration metric: took 38.791µs to acquireMachinesLock for "addons-994396"
	I1101 08:44:38.465244  535088 start.go:93] Provisioning new machine with config: &{Name:addons-994396 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:addons-994396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 08:44:38.465309  535088 start.go:125] createHost starting for "" (driver="kvm2")
	I1101 08:44:38.467651  535088 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1101 08:44:38.467824  535088 start.go:159] libmachine.API.Create for "addons-994396" (driver="kvm2")
	I1101 08:44:38.467852  535088 client.go:173] LocalClient.Create starting
	I1101 08:44:38.467960  535088 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem
	I1101 08:44:38.525135  535088 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem
	I1101 08:44:38.966403  535088 main.go:143] libmachine: creating domain...
	I1101 08:44:38.966427  535088 main.go:143] libmachine: creating network...
	I1101 08:44:38.968049  535088 main.go:143] libmachine: found existing default network
	I1101 08:44:38.968268  535088 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 08:44:38.968754  535088 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b9b7d0}
	I1101 08:44:38.968919  535088 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-994396</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 08:44:38.974811  535088 main.go:143] libmachine: creating private network mk-addons-994396 192.168.39.0/24...
	I1101 08:44:39.051181  535088 main.go:143] libmachine: private network mk-addons-994396 192.168.39.0/24 created
	I1101 08:44:39.051459  535088 main.go:143] libmachine: <network>
	  <name>mk-addons-994396</name>
	  <uuid>960ab3a9-e2ba-413f-8b77-ff4745b036d0</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:3e:a3:01'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
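	The private network logged above goes through libvirt's usual define-then-start cycle before any VM is attached to it. The following is a minimal sketch of those two steps driven through the virsh CLI from Go; the file name net.xml is hypothetical, and minikube itself talks to libvirt through its Go bindings rather than shelling out like this.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// defineAndStartNetwork registers a persistent libvirt network from an XML
// file and then starts it, mirroring the "defining"/"created" lines above.
func defineAndStartNetwork(xmlPath, name string) error {
	// "virsh net-define" registers the network definition with libvirtd.
	if out, err := exec.Command("virsh", "net-define", xmlPath).CombinedOutput(); err != nil {
		return fmt.Errorf("net-define: %v: %s", err, out)
	}
	// "virsh net-start" brings up the bridge (virbr1 in the log) and its DHCP range.
	if out, err := exec.Command("virsh", "net-start", name).CombinedOutput(); err != nil {
		return fmt.Errorf("net-start: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Assumes the XML shown above has been written to net.xml.
	if err := defineAndStartNetwork("net.xml", "mk-addons-994396"); err != nil {
		log.Fatal(err)
	}
}

	Once started, libvirt fills in the generated pieces seen in the dump that follows (UUID, bridge name, MAC address).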
	I1101 08:44:39.051486  535088 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396 ...
	I1101 08:44:39.051511  535088 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21833-530629/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso
	I1101 08:44:39.051536  535088 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21833-530629/.minikube
	I1101 08:44:39.051601  535088 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21833-530629/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21833-530629/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso...
	I1101 08:44:39.334278  535088 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa...
	I1101 08:44:39.562590  535088 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/addons-994396.rawdisk...
	I1101 08:44:39.562642  535088 main.go:143] libmachine: Writing magic tar header
	I1101 08:44:39.562674  535088 main.go:143] libmachine: Writing SSH key tar header
	I1101 08:44:39.562773  535088 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396 ...
	I1101 08:44:39.562837  535088 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396
	I1101 08:44:39.562920  535088 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396 (perms=drwx------)
	I1101 08:44:39.562944  535088 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21833-530629/.minikube/machines
	I1101 08:44:39.562958  535088 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21833-530629/.minikube/machines (perms=drwxr-xr-x)
	I1101 08:44:39.562977  535088 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21833-530629/.minikube
	I1101 08:44:39.562988  535088 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21833-530629/.minikube (perms=drwxr-xr-x)
	I1101 08:44:39.562999  535088 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21833-530629
	I1101 08:44:39.563010  535088 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21833-530629 (perms=drwxrwxr-x)
	I1101 08:44:39.563022  535088 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1101 08:44:39.563032  535088 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1101 08:44:39.563043  535088 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1101 08:44:39.563053  535088 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1101 08:44:39.563063  535088 main.go:143] libmachine: checking permissions on dir: /home
	I1101 08:44:39.563072  535088 main.go:143] libmachine: skipping /home - not owner
	I1101 08:44:39.563079  535088 main.go:143] libmachine: defining domain...
	I1101 08:44:39.564528  535088 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-994396</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/addons-994396.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-994396'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
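	The domain XML above is rendered from the machine configuration (name, memory, vCPU count, ISO and raw-disk paths) before being handed to libvirt. Below is a minimal text/template sketch of that rendering step with a deliberately abbreviated template; the real definition, as the log shows, also wires up the two network interfaces, serial console, and RNG device.

package main

import (
	"os"
	"text/template"
)

// domainConfig holds the handful of values that vary per machine.
type domainConfig struct {
	Name      string
	MemoryMiB int
	VCPUs     int
	ISOPath   string
	DiskPath  string
}

// domainTmpl is a heavily trimmed version of the XML in the log; only the
// fields that come from the config are parameterised here.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'><source file='{{.ISOPath}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
    <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.DiskPath}}'/><target dev='hda' bus='virtio'/></disk>
  </devices>
</domain>
`

func main() {
	cfg := domainConfig{
		Name:      "addons-994396",
		MemoryMiB: 4096,
		VCPUs:     2,
		ISOPath:   "/path/to/boot2docker.iso",     // illustrative path
		DiskPath:  "/path/to/addons-994396.rawdisk", // illustrative path
	}
	tmpl := template.Must(template.New("domain").Parse(domainTmpl))
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}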
	I1101 08:44:39.569846  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:73:0a:92 in network default
	I1101 08:44:39.570479  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:39.570497  535088 main.go:143] libmachine: starting domain...
	I1101 08:44:39.570501  535088 main.go:143] libmachine: ensuring networks are active...
	I1101 08:44:39.571361  535088 main.go:143] libmachine: Ensuring network default is active
	I1101 08:44:39.571760  535088 main.go:143] libmachine: Ensuring network mk-addons-994396 is active
	I1101 08:44:39.572463  535088 main.go:143] libmachine: getting domain XML...
	I1101 08:44:39.574016  535088 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-994396</name>
	  <uuid>47158355-a959-4cbf-84ea-23a10000597a</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/addons-994396.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:2a:d2:e3'/>
	      <source network='mk-addons-994396'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:73:0a:92'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1101 08:44:40.850976  535088 main.go:143] libmachine: waiting for domain to start...
	I1101 08:44:40.852401  535088 main.go:143] libmachine: domain is now running
	I1101 08:44:40.852417  535088 main.go:143] libmachine: waiting for IP...
	I1101 08:44:40.853195  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:40.853985  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:40.853994  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:40.854261  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:40.854309  535088 retry.go:31] will retry after 216.262446ms: waiting for domain to come up
	I1101 08:44:41.071837  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:41.072843  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:41.072862  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:41.073274  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:41.073319  535088 retry.go:31] will retry after 360.302211ms: waiting for domain to come up
	I1101 08:44:41.434879  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:41.435804  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:41.435822  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:41.436172  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:41.436214  535088 retry.go:31] will retry after 371.777554ms: waiting for domain to come up
	I1101 08:44:41.809947  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:41.810703  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:41.810722  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:41.811072  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:41.811112  535088 retry.go:31] will retry after 462.843758ms: waiting for domain to come up
	I1101 08:44:42.275984  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:42.276618  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:42.276637  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:42.276993  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:42.277037  535088 retry.go:31] will retry after 560.265466ms: waiting for domain to come up
	I1101 08:44:42.838931  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:42.839781  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:42.839798  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:42.840224  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:42.840268  535088 retry.go:31] will retry after 839.411139ms: waiting for domain to come up
	I1101 08:44:43.681040  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:43.681790  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:43.681802  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:43.682192  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:43.682243  535088 retry.go:31] will retry after 1.099878288s: waiting for domain to come up
	I1101 08:44:44.783686  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:44.784502  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:44.784521  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:44.784840  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:44.784888  535088 retry.go:31] will retry after 1.052374717s: waiting for domain to come up
	I1101 08:44:45.839257  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:45.839889  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:45.839926  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:45.840243  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:45.840284  535088 retry.go:31] will retry after 1.704542625s: waiting for domain to come up
	I1101 08:44:47.547411  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:47.548205  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:47.548225  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:47.548588  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:47.548630  535088 retry.go:31] will retry after 1.752267255s: waiting for domain to come up
	I1101 08:44:49.302359  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:49.303199  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:49.303210  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:49.303522  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:49.303559  535088 retry.go:31] will retry after 2.861627149s: waiting for domain to come up
	I1101 08:44:52.168696  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:52.169368  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:52.169385  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:52.169681  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:52.169738  535088 retry.go:31] will retry after 2.277819072s: waiting for domain to come up
	I1101 08:44:54.449193  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:54.449957  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:54.449978  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:54.450273  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:54.450316  535088 retry.go:31] will retry after 3.87405165s: waiting for domain to come up
	I1101 08:44:58.329388  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.330073  535088 main.go:143] libmachine: domain addons-994396 has current primary IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.330089  535088 main.go:143] libmachine: found domain IP: 192.168.39.195
	I1101 08:44:58.330096  535088 main.go:143] libmachine: reserving static IP address...
	I1101 08:44:58.330490  535088 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-994396", mac: "52:54:00:2a:d2:e3", ip: "192.168.39.195"} in network mk-addons-994396
	I1101 08:44:58.532247  535088 main.go:143] libmachine: reserved static IP address 192.168.39.195 for domain addons-994396
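	The repeated "will retry after …: waiting for domain to come up" lines above come from a poll-with-growing-delay loop: list the network's DHCP leases, and if the domain's MAC has no address yet, sleep a little longer and try again. A self-contained sketch of that pattern is below; the lease lookup shells out to virsh net-dhcp-leases purely for illustration, whereas minikube queries libvirt directly (and falls back to ARP, as the source=lease/source=arp lines show).

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// leaseIP returns the IP the given network has leased to mac, or "" if no
// lease exists yet.
func leaseIP(network, mac string) (string, error) {
	out, err := exec.Command("virsh", "net-dhcp-leases", network).Output()
	if err != nil {
		return "", err
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.Contains(line, mac) {
			// Columns: expiry-date expiry-time MAC protocol IP/prefix hostname client-id
			fields := strings.Fields(line)
			if len(fields) >= 5 {
				return strings.Split(fields[4], "/")[0], nil
			}
		}
	}
	return "", nil
}

// waitForIP polls until the MAC shows up in the DHCP leases, growing the
// delay between attempts much like the retry.go lines in the log.
func waitForIP(network, mac string, timeout time.Duration) (string, error) {
	delay := 200 * time.Millisecond
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if ip, err := leaseIP(network, mac); err == nil && ip != "" {
			return ip, nil
		}
		time.Sleep(delay)
		delay += delay / 2 // back off roughly 50% per attempt
	}
	return "", fmt.Errorf("no lease for %s in network %s after %s", mac, network, timeout)
}

func main() {
	ip, err := waitForIP("mk-addons-994396", "52:54:00:2a:d2:e3", 3*time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Println("domain IP:", ip) // 192.168.39.195 in this run
}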
	I1101 08:44:58.532270  535088 main.go:143] libmachine: waiting for SSH...
	I1101 08:44:58.532276  535088 main.go:143] libmachine: Getting to WaitForSSH function...
	I1101 08:44:58.535646  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.536214  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:58.536242  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.536445  535088 main.go:143] libmachine: Using SSH client type: native
	I1101 08:44:58.536737  535088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1101 08:44:58.536748  535088 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1101 08:44:58.655800  535088 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 08:44:58.656194  535088 main.go:143] libmachine: domain creation complete
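	The "waiting for SSH" step is an authenticated probe that simply runs exit 0 until it succeeds, using the generated machine key and the docker user seen in the log. A minimal sketch of one such probe with golang.org/x/crypto/ssh follows; skipping host-key verification is acceptable here only because the target is a throwaway test VM.

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// probeSSH dials host:22 with the given private key and runs "exit 0",
// returning nil once the guest's sshd is reachable and accepts the key.
func probeSSH(host, user, keyPath string) error {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM only
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", host+":22", cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0")
}

func main() {
	err := probeSSH("192.168.39.195", "docker",
		"/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa")
	fmt.Println("ssh probe:", err)
}

	In the actual run the probe is retried until it returns cleanly, after which provisioning (hostname, certificates, container-runtime options) proceeds over the same SSH channel.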
	I1101 08:44:58.657668  535088 machine.go:94] provisionDockerMachine start ...
	I1101 08:44:58.660444  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.660857  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:58.660881  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.661078  535088 main.go:143] libmachine: Using SSH client type: native
	I1101 08:44:58.661273  535088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1101 08:44:58.661283  535088 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 08:44:58.781217  535088 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1101 08:44:58.781253  535088 buildroot.go:166] provisioning hostname "addons-994396"
	I1101 08:44:58.784387  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.784787  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:58.784821  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.784992  535088 main.go:143] libmachine: Using SSH client type: native
	I1101 08:44:58.785186  535088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1101 08:44:58.785198  535088 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-994396 && echo "addons-994396" | sudo tee /etc/hostname
	I1101 08:44:58.921865  535088 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-994396
	
	I1101 08:44:58.924651  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.925106  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:58.925158  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.925363  535088 main.go:143] libmachine: Using SSH client type: native
	I1101 08:44:58.925623  535088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1101 08:44:58.925647  535088 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-994396' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-994396/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-994396' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 08:44:59.053021  535088 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 08:44:59.053062  535088 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21833-530629/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-530629/.minikube}
	I1101 08:44:59.053121  535088 buildroot.go:174] setting up certificates
	I1101 08:44:59.053134  535088 provision.go:84] configureAuth start
	I1101 08:44:59.056039  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.056491  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.056527  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.059390  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.059768  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.059793  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.059971  535088 provision.go:143] copyHostCerts
	I1101 08:44:59.060039  535088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-530629/.minikube/key.pem (1675 bytes)
	I1101 08:44:59.060157  535088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-530629/.minikube/ca.pem (1078 bytes)
	I1101 08:44:59.060215  535088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-530629/.minikube/cert.pem (1123 bytes)
	I1101 08:44:59.060262  535088 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-530629/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca-key.pem org=jenkins.addons-994396 san=[127.0.0.1 192.168.39.195 addons-994396 localhost minikube]
	I1101 08:44:59.098818  535088 provision.go:177] copyRemoteCerts
	I1101 08:44:59.098909  535088 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 08:44:59.101492  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.101853  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.101876  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.102044  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:44:59.192919  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 08:44:59.224374  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 08:44:59.254587  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 08:44:59.285112  535088 provision.go:87] duration metric: took 231.963697ms to configureAuth
	I1101 08:44:59.285151  535088 buildroot.go:189] setting minikube options for container-runtime
	I1101 08:44:59.285333  535088 config.go:182] Loaded profile config "addons-994396": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:44:59.288033  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.288440  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.288461  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.288660  535088 main.go:143] libmachine: Using SSH client type: native
	I1101 08:44:59.288854  535088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1101 08:44:59.288872  535088 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 08:44:59.552498  535088 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 08:44:59.552535  535088 machine.go:97] duration metric: took 894.848438ms to provisionDockerMachine
	I1101 08:44:59.552551  535088 client.go:176] duration metric: took 21.084691653s to LocalClient.Create
	I1101 08:44:59.552575  535088 start.go:167] duration metric: took 21.084749844s to libmachine.API.Create "addons-994396"
	I1101 08:44:59.552585  535088 start.go:293] postStartSetup for "addons-994396" (driver="kvm2")
	I1101 08:44:59.552598  535088 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 08:44:59.552698  535088 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 08:44:59.555985  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.556410  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.556446  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.556594  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:44:59.646378  535088 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 08:44:59.651827  535088 info.go:137] Remote host: Buildroot 2025.02
	I1101 08:44:59.651860  535088 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-530629/.minikube/addons for local assets ...
	I1101 08:44:59.652002  535088 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-530629/.minikube/files for local assets ...
	I1101 08:44:59.652045  535088 start.go:296] duration metric: took 99.451778ms for postStartSetup
	I1101 08:44:59.655428  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.655951  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.655983  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.656303  535088 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/config.json ...
	I1101 08:44:59.656524  535088 start.go:128] duration metric: took 21.191204758s to createHost
	I1101 08:44:59.659225  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.659662  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.659688  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.659918  535088 main.go:143] libmachine: Using SSH client type: native
	I1101 08:44:59.660165  535088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1101 08:44:59.660179  535088 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1101 08:44:59.778959  535088 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761986699.744832808
	
	I1101 08:44:59.778992  535088 fix.go:216] guest clock: 1761986699.744832808
	I1101 08:44:59.779003  535088 fix.go:229] Guest: 2025-11-01 08:44:59.744832808 +0000 UTC Remote: 2025-11-01 08:44:59.656538269 +0000 UTC m=+21.291332648 (delta=88.294539ms)
	I1101 08:44:59.779025  535088 fix.go:200] guest clock delta is within tolerance: 88.294539ms
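For reference, the fix.go lines above compare the guest's clock (read over SSH via date +%s.%N) against the host's and accept the skew if it is small. A minimal Go sketch of that delta check using the timestamps from this log; the 2s tolerance is an assumption, the log only shows that an 88ms delta was accepted:

package main

import (
	"fmt"
	"time"
)

// clockDelta returns the absolute skew between guest and host clocks and
// whether it falls inside the accepted tolerance.
func clockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d, d <= tolerance
}

func main() {
	guest := time.Date(2025, 11, 1, 8, 44, 59, 744832808, time.UTC)
	host := time.Date(2025, 11, 1, 8, 44, 59, 656538269, time.UTC)
	d, ok := clockDelta(guest, host, 2*time.Second) // tolerance value is assumed
	fmt.Printf("delta=%v withinTolerance=%v\n", d, ok) // delta=88.294539ms withinTolerance=true
}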
	I1101 08:44:59.779033  535088 start.go:83] releasing machines lock for "addons-994396", held for 21.31379566s
	I1101 08:44:59.782561  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.783052  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.783085  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.783744  535088 ssh_runner.go:195] Run: cat /version.json
	I1101 08:44:59.783923  535088 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 08:44:59.786949  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.787338  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.787364  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.787467  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.787547  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:44:59.788054  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.788100  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.788306  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:44:59.898855  535088 ssh_runner.go:195] Run: systemctl --version
	I1101 08:44:59.905749  535088 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 08:45:00.064091  535088 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 08:45:00.072201  535088 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 08:45:00.072263  535088 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 08:45:00.092562  535088 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 08:45:00.092584  535088 start.go:496] detecting cgroup driver to use...
	I1101 08:45:00.092661  535088 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 08:45:00.112010  535088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 08:45:00.129164  535088 docker.go:218] disabling cri-docker service (if available) ...
	I1101 08:45:00.129222  535088 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 08:45:00.147169  535088 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 08:45:00.164876  535088 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 08:45:00.317011  535088 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 08:45:00.521291  535088 docker.go:234] disabling docker service ...
	I1101 08:45:00.521377  535088 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 08:45:00.537927  535088 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 08:45:00.552544  535088 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 08:45:00.714401  535088 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 08:45:00.855387  535088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 08:45:00.871802  535088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 08:45:00.895848  535088 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 08:45:00.895969  535088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:45:00.908735  535088 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 08:45:00.908831  535088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:45:00.924244  535088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:45:00.938467  535088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:45:00.951396  535088 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 08:45:00.965054  535088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:45:00.977595  535088 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:45:00.998868  535088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
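The sed/grep edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and an unprivileged-port sysctl. A rough Go-side summary of the keys that should be present afterwards; only the keys touched by the log are listed, and their grouping under TOML sections ([crio.image], [crio.runtime]) in the real drop-in is an assumption:

package main

import "fmt"

// expected02CrioKeys summarizes the net effect of the sed edits above; it is
// not a dump of the actual file, which contains more settings and section headers.
const expected02CrioKeys = `pause_image = "registry.k8s.io/pause:3.10.1"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]`

func main() { fmt.Println(expected02CrioKeys) }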
	I1101 08:45:01.011547  535088 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 08:45:01.022709  535088 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 08:45:01.022775  535088 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 08:45:01.044963  535088 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 08:45:01.057499  535088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 08:45:01.203336  535088 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 08:45:01.311792  535088 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 08:45:01.311884  535088 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
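The start.go line above announces a bounded wait (up to 60s) for the CRI-O socket, and the stat that follows succeeds right away. A minimal sketch of such a wait loop; the helper name and 500ms poll interval are illustrative, not minikube's actual implementation:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls for a filesystem path (here the CRI-O socket) until it
// exists or the timeout elapses.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket is present")
}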
	I1101 08:45:01.317453  535088 start.go:564] Will wait 60s for crictl version
	I1101 08:45:01.317538  535088 ssh_runner.go:195] Run: which crictl
	I1101 08:45:01.321986  535088 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 08:45:01.367266  535088 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1101 08:45:01.367363  535088 ssh_runner.go:195] Run: crio --version
	I1101 08:45:01.398127  535088 ssh_runner.go:195] Run: crio --version
	I1101 08:45:01.431424  535088 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1101 08:45:01.435939  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:01.436441  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:01.436471  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:01.436732  535088 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1101 08:45:01.441662  535088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 08:45:01.457635  535088 kubeadm.go:884] updating cluster {Name:addons-994396 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-994396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 08:45:01.457753  535088 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 08:45:01.457802  535088 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 08:45:01.495090  535088 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1101 08:45:01.495193  535088 ssh_runner.go:195] Run: which lz4
	I1101 08:45:01.500348  535088 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 08:45:01.506036  535088 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 08:45:01.506082  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1101 08:45:03.083875  535088 crio.go:462] duration metric: took 1.583585669s to copy over tarball
	I1101 08:45:03.084036  535088 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 08:45:04.665932  535088 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.581842537s)
	I1101 08:45:04.665965  535088 crio.go:469] duration metric: took 1.582007439s to extract the tarball
	I1101 08:45:04.665976  535088 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 08:45:04.707682  535088 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 08:45:04.751036  535088 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 08:45:04.751073  535088 cache_images.go:86] Images are preloaded, skipping loading
	I1101 08:45:04.751085  535088 kubeadm.go:935] updating node { 192.168.39.195 8443 v1.34.1 crio true true} ...
	I1101 08:45:04.751212  535088 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-994396 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.195
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-994396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 08:45:04.751302  535088 ssh_runner.go:195] Run: crio config
	I1101 08:45:04.801702  535088 cni.go:84] Creating CNI manager for ""
	I1101 08:45:04.801733  535088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 08:45:04.801758  535088 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 08:45:04.801791  535088 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.195 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-994396 NodeName:addons-994396 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.195"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.195 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 08:45:04.801978  535088 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.195
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-994396"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.195"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.195"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 08:45:04.802066  535088 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 08:45:04.814571  535088 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 08:45:04.814653  535088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 08:45:04.826605  535088 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1101 08:45:04.846937  535088 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 08:45:04.868213  535088 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1101 08:45:04.888962  535088 ssh_runner.go:195] Run: grep 192.168.39.195	control-plane.minikube.internal$ /etc/hosts
	I1101 08:45:04.893299  535088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.195	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
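This bash one-liner (like the host.minikube.internal one at 08:45:01.441662) makes the /etc/hosts edit idempotent: drop any stale line for the name, then append the current mapping. A rough Go equivalent; it writes the file directly for brevity, whereas the logged command stages the result in /tmp/h.$$ and installs it with sudo cp:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry removes any existing line ending in "\t<host>" and appends
// "ip\thost", mirroring the grep -v / echo / cp pipeline in the log.
func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	kept := lines[:0]
	for _, line := range lines {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHostsEntry("/etc/hosts", "192.168.39.195", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}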
	I1101 08:45:04.908547  535088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 08:45:05.049704  535088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 08:45:05.081089  535088 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396 for IP: 192.168.39.195
	I1101 08:45:05.081124  535088 certs.go:195] generating shared ca certs ...
	I1101 08:45:05.081146  535088 certs.go:227] acquiring lock for ca certs: {Name:mkfa41f6ee02a6d4adbbbd414d6f4b29bf47b076 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.081312  535088 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-530629/.minikube/ca.key
	I1101 08:45:05.135626  535088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt ...
	I1101 08:45:05.135669  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt: {Name:mk42d9a91568201fc7bb838317bb109a9d557e4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.135920  535088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-530629/.minikube/ca.key ...
	I1101 08:45:05.135935  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/ca.key: {Name:mk8868035ca874da4b6bcd8361c76f97522a09dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.136031  535088 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.key
	I1101 08:45:05.223112  535088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.crt ...
	I1101 08:45:05.223159  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.crt: {Name:mk17c24c1e5b8188202459729e4a5c2f9a4008a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.223343  535088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.key ...
	I1101 08:45:05.223356  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.key: {Name:mk64bb220f00b339bafb0b18442258c31c6af7ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.223432  535088 certs.go:257] generating profile certs ...
	I1101 08:45:05.223509  535088 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.key
	I1101 08:45:05.223524  535088 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt with IP's: []
	I1101 08:45:05.791770  535088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt ...
	I1101 08:45:05.791805  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: {Name:mk739df015c10897beee55b57aac6a9687c49aee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.791993  535088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.key ...
	I1101 08:45:05.792008  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.key: {Name:mk22e303787fbf3b8945b47ac917db338129138f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.792086  535088 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.key.2a971b58
	I1101 08:45:05.792105  535088 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.crt.2a971b58 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.195]
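Among the SANs of the apiserver certificate above, 10.96.0.1 is the first usable address of the ServiceCIDR 10.96.0.0/12 from the cluster config, i.e. the ClusterIP the in-cluster kubernetes Service receives. A small sketch of that derivation (the function name is illustrative):

package main

import (
	"fmt"
	"net/netip"
)

// firstServiceIP returns the service CIDR's network address plus one, the
// address conventionally given to the "kubernetes" Service.
func firstServiceIP(cidr string) (netip.Addr, error) {
	p, err := netip.ParsePrefix(cidr)
	if err != nil {
		return netip.Addr{}, err
	}
	return p.Masked().Addr().Next(), nil
}

func main() {
	ip, err := firstServiceIP("10.96.0.0/12")
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 10.96.0.1
}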
	I1101 08:45:05.964688  535088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.crt.2a971b58 ...
	I1101 08:45:05.964721  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.crt.2a971b58: {Name:mkc85c65639cbe37cb2f18c20238504fe651c568 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.964892  535088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.key.2a971b58 ...
	I1101 08:45:05.964917  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.key.2a971b58: {Name:mk0a07f1288d6c9ced8ef2d4bb53cbfce6f3c734 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.964998  535088 certs.go:382] copying /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.crt.2a971b58 -> /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.crt
	I1101 08:45:05.965075  535088 certs.go:386] copying /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.key.2a971b58 -> /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.key
	I1101 08:45:05.965124  535088 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.key
	I1101 08:45:05.965142  535088 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.crt with IP's: []
	I1101 08:45:06.097161  535088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.crt ...
	I1101 08:45:06.097197  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.crt: {Name:mke456d45c85355b327c605777e7e939bd178f8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:06.097374  535088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.key ...
	I1101 08:45:06.097388  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.key: {Name:mk96b8f9598bf40057b4d6b2c6e97a30a363b3bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:06.097558  535088 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 08:45:06.097602  535088 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem (1078 bytes)
	I1101 08:45:06.097627  535088 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem (1123 bytes)
	I1101 08:45:06.097651  535088 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/key.pem (1675 bytes)
	I1101 08:45:06.098363  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 08:45:06.130486  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 08:45:06.160429  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 08:45:06.189962  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 08:45:06.219452  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 08:45:06.250552  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 08:45:06.282860  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 08:45:06.313986  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 08:45:06.344383  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 08:45:06.376611  535088 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 08:45:06.399751  535088 ssh_runner.go:195] Run: openssl version
	I1101 08:45:06.406933  535088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 08:45:06.421716  535088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 08:45:06.427410  535088 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 08:45:06.427478  535088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 08:45:06.435363  535088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 08:45:06.449854  535088 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 08:45:06.455299  535088 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 08:45:06.455368  535088 kubeadm.go:401] StartCluster: {Name:addons-994396 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-994396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 08:45:06.455464  535088 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:45:06.455528  535088 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:45:06.499318  535088 cri.go:89] found id: ""
	I1101 08:45:06.499395  535088 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 08:45:06.513696  535088 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 08:45:06.527370  535088 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 08:45:06.541099  535088 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 08:45:06.541122  535088 kubeadm.go:158] found existing configuration files:
	
	I1101 08:45:06.541170  535088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 08:45:06.553610  535088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 08:45:06.553677  535088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 08:45:06.567384  535088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 08:45:06.580377  535088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 08:45:06.580444  535088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 08:45:06.593440  535088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 08:45:06.605393  535088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 08:45:06.605460  535088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 08:45:06.618978  535088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 08:45:06.631411  535088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 08:45:06.631487  535088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 08:45:06.645452  535088 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1101 08:45:06.719122  535088 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 08:45:06.719190  535088 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 08:45:06.829004  535088 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 08:45:06.829160  535088 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 08:45:06.829291  535088 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 08:45:06.841691  535088 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 08:45:06.866137  535088 out.go:252]   - Generating certificates and keys ...
	I1101 08:45:06.866269  535088 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 08:45:06.866364  535088 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 08:45:07.164883  535088 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 08:45:07.767615  535088 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 08:45:08.072088  535088 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 08:45:08.514870  535088 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 08:45:08.646331  535088 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 08:45:08.646504  535088 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-994396 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	I1101 08:45:08.781122  535088 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 08:45:08.781335  535088 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-994396 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	I1101 08:45:08.899420  535088 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 08:45:09.007181  535088 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 08:45:09.224150  535088 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 08:45:09.224224  535088 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 08:45:09.511033  535088 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 08:45:09.752693  535088 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 08:45:09.819463  535088 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 08:45:10.005082  535088 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 08:45:10.463552  535088 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 08:45:10.464025  535088 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 08:45:10.466454  535088 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 08:45:10.471575  535088 out.go:252]   - Booting up control plane ...
	I1101 08:45:10.471714  535088 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 08:45:10.471809  535088 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 08:45:10.471913  535088 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 08:45:10.490781  535088 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 08:45:10.491002  535088 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 08:45:10.498306  535088 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 08:45:10.498812  535088 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 08:45:10.498893  535088 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 08:45:10.686796  535088 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 08:45:10.686991  535088 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 08:45:11.697343  535088 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.005207328s
	I1101 08:45:11.699752  535088 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 08:45:11.699949  535088 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.195:8443/livez
	I1101 08:45:11.700150  535088 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 08:45:11.704134  535088 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 08:45:13.981077  535088 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.280860487s
	I1101 08:45:15.371368  535088 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.67283221s
	I1101 08:45:17.198417  535088 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501722237s
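The three [control-plane-check] results above come from polling each component's health endpoint until it answers 200. A minimal sketch of that kind of wait; skipping TLS verification is a simplification for the sketch (kubeadm trusts the cluster CA), and the 250ms interval is assumed:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns HTTP 200 or the timeout elapses and
// reports how long that took.
func waitHealthy(url string, timeout time.Duration) (time.Duration, error) {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	start := time.Now()
	for time.Since(start) < timeout {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return time.Since(start), nil
			}
		}
		time.Sleep(250 * time.Millisecond)
	}
	return 0, fmt.Errorf("%s not healthy after %v", url, timeout)
}

func main() {
	d, err := waitHealthy("https://127.0.0.1:10257/healthz", 4*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("kube-controller-manager healthy after %v\n", d)
}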
	I1101 08:45:17.211581  535088 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 08:45:17.231075  535088 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 08:45:17.253882  535088 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 08:45:17.254137  535088 kubeadm.go:319] [mark-control-plane] Marking the node addons-994396 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 08:45:17.268868  535088 kubeadm.go:319] [bootstrap-token] Using token: f9fr0l.j77e5jevkskl9xb5
	I1101 08:45:17.270121  535088 out.go:252]   - Configuring RBAC rules ...
	I1101 08:45:17.270326  535088 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 08:45:17.277792  535088 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 08:45:17.293695  535088 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 08:45:17.296955  535088 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 08:45:17.300284  535088 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 08:45:17.303890  535088 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 08:45:17.605222  535088 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 08:45:18.065761  535088 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 08:45:18.604676  535088 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 08:45:18.605674  535088 kubeadm.go:319] 
	I1101 08:45:18.605802  535088 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 08:45:18.605830  535088 kubeadm.go:319] 
	I1101 08:45:18.605992  535088 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 08:45:18.606023  535088 kubeadm.go:319] 
	I1101 08:45:18.606068  535088 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 08:45:18.606156  535088 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 08:45:18.606234  535088 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 08:45:18.606243  535088 kubeadm.go:319] 
	I1101 08:45:18.606321  535088 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 08:45:18.606330  535088 kubeadm.go:319] 
	I1101 08:45:18.606402  535088 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 08:45:18.606415  535088 kubeadm.go:319] 
	I1101 08:45:18.606489  535088 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 08:45:18.606605  535088 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 08:45:18.606702  535088 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 08:45:18.606712  535088 kubeadm.go:319] 
	I1101 08:45:18.606815  535088 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 08:45:18.606947  535088 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 08:45:18.606965  535088 kubeadm.go:319] 
	I1101 08:45:18.607067  535088 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token f9fr0l.j77e5jevkskl9xb5 \
	I1101 08:45:18.607196  535088 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:56aa18b20985495d814b65ba7a2f910118620c74c98b944601f44598a9c0be1d \
	I1101 08:45:18.607233  535088 kubeadm.go:319] 	--control-plane 
	I1101 08:45:18.607244  535088 kubeadm.go:319] 
	I1101 08:45:18.607366  535088 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 08:45:18.607389  535088 kubeadm.go:319] 
	I1101 08:45:18.607497  535088 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token f9fr0l.j77e5jevkskl9xb5 \
	I1101 08:45:18.607642  535088 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:56aa18b20985495d814b65ba7a2f910118620c74c98b944601f44598a9c0be1d 
	I1101 08:45:18.609590  535088 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
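The --discovery-token-ca-cert-hash value printed in the join commands above is "sha256:" followed by the hex SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A small Go sketch that recomputes it from the CA cert this run copied to /var/lib/minikube/certs/ca.crt:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

// Recomputes the kubeadm discovery-token CA cert hash from the cluster CA.
func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:])) // should match the hash in the join command
}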
	I1101 08:45:18.609615  535088 cni.go:84] Creating CNI manager for ""
	I1101 08:45:18.609625  535088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 08:45:18.611467  535088 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 08:45:18.612559  535088 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 08:45:18.629659  535088 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1101 08:45:18.653188  535088 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 08:45:18.653266  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:18.653283  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-994396 minikube.k8s.io/updated_at=2025_11_01T08_45_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7 minikube.k8s.io/name=addons-994396 minikube.k8s.io/primary=true
	I1101 08:45:18.823964  535088 ops.go:34] apiserver oom_adj: -16
	I1101 08:45:18.824003  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:19.324429  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:19.824169  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:20.324357  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:20.825065  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:21.324643  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:21.824929  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:22.325055  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:22.824179  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:23.324346  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:23.422037  535088 kubeadm.go:1114] duration metric: took 4.768840437s to wait for elevateKubeSystemPrivileges
	I1101 08:45:23.422092  535088 kubeadm.go:403] duration metric: took 16.966730014s to StartCluster
	I1101 08:45:23.422117  535088 settings.go:142] acquiring lock: {Name:mke0bea80b55c21af3a3a0f83862cfe6da014dd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:23.422289  535088 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-530629/kubeconfig
	I1101 08:45:23.422848  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/kubeconfig: {Name:mk1f1e6312f33030082fd627c6f74ca7eee16587 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:23.423145  535088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 08:45:23.423170  535088 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 08:45:23.423239  535088 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1101 08:45:23.423378  535088 addons.go:70] Setting yakd=true in profile "addons-994396"
	I1101 08:45:23.423402  535088 addons.go:239] Setting addon yakd=true in "addons-994396"
	I1101 08:45:23.423420  535088 addons.go:70] Setting inspektor-gadget=true in profile "addons-994396"
	I1101 08:45:23.423440  535088 config.go:182] Loaded profile config "addons-994396": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:45:23.423457  535088 addons.go:239] Setting addon inspektor-gadget=true in "addons-994396"
	I1101 08:45:23.423459  535088 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-994396"
	I1101 08:45:23.423473  535088 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-994396"
	I1101 08:45:23.423435  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.423491  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.423507  535088 addons.go:70] Setting registry=true in profile "addons-994396"
	I1101 08:45:23.423518  535088 addons.go:239] Setting addon registry=true in "addons-994396"
	I1101 08:45:23.423522  535088 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-994396"
	I1101 08:45:23.423539  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.423555  535088 addons.go:70] Setting cloud-spanner=true in profile "addons-994396"
	I1101 08:45:23.423568  535088 addons.go:239] Setting addon cloud-spanner=true in "addons-994396"
	I1101 08:45:23.423606  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.423731  535088 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-994396"
	I1101 08:45:23.423760  535088 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-994396"
	I1101 08:45:23.424125  535088 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-994396"
	I1101 08:45:23.424214  535088 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-994396"
	I1101 08:45:23.424248  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.423443  535088 addons.go:70] Setting metrics-server=true in profile "addons-994396"
	I1101 08:45:23.424283  535088 addons.go:239] Setting addon metrics-server=true in "addons-994396"
	I1101 08:45:23.424313  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.423545  535088 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-994396"
	I1101 08:45:23.424411  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.424496  535088 addons.go:70] Setting ingress=true in profile "addons-994396"
	I1101 08:45:23.423498  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.424512  535088 addons.go:239] Setting addon ingress=true in "addons-994396"
	I1101 08:45:23.424544  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.425045  535088 addons.go:70] Setting registry-creds=true in profile "addons-994396"
	I1101 08:45:23.425074  535088 addons.go:239] Setting addon registry-creds=true in "addons-994396"
	I1101 08:45:23.425105  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.425174  535088 addons.go:70] Setting volcano=true in profile "addons-994396"
	I1101 08:45:23.425210  535088 addons.go:239] Setting addon volcano=true in "addons-994396"
	I1101 08:45:23.425245  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.423474  535088 addons.go:70] Setting default-storageclass=true in profile "addons-994396"
	I1101 08:45:23.425528  535088 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-994396"
	I1101 08:45:23.425555  535088 addons.go:70] Setting gcp-auth=true in profile "addons-994396"
	I1101 08:45:23.425587  535088 addons.go:70] Setting volumesnapshots=true in profile "addons-994396"
	I1101 08:45:23.425594  535088 mustload.go:66] Loading cluster: addons-994396
	I1101 08:45:23.425605  535088 addons.go:239] Setting addon volumesnapshots=true in "addons-994396"
	I1101 08:45:23.425629  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.425759  535088 config.go:182] Loaded profile config "addons-994396": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:45:23.426001  535088 addons.go:70] Setting storage-provisioner=true in profile "addons-994396"
	I1101 08:45:23.426034  535088 addons.go:239] Setting addon storage-provisioner=true in "addons-994396"
	I1101 08:45:23.426060  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.426263  535088 addons.go:70] Setting ingress-dns=true in profile "addons-994396"
	I1101 08:45:23.426312  535088 addons.go:239] Setting addon ingress-dns=true in "addons-994396"
	I1101 08:45:23.426349  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.428071  535088 out.go:179] * Verifying Kubernetes components...
	I1101 08:45:23.430376  535088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 08:45:23.432110  535088 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1101 08:45:23.432211  535088 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1101 08:45:23.432239  535088 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1101 08:45:23.432548  535088 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-994396"
	I1101 08:45:23.433347  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.433599  535088 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1101 08:45:23.433622  535088 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1101 08:45:23.434372  535088 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1101 08:45:23.434372  535088 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1101 08:45:23.434372  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1101 08:45:23.434399  535088 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	W1101 08:45:23.434936  535088 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1101 08:45:23.434947  535088 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1101 08:45:23.434397  535088 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1101 08:45:23.435739  535088 addons.go:239] Setting addon default-storageclass=true in "addons-994396"
	I1101 08:45:23.435133  535088 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 08:45:23.435780  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.435145  535088 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1101 08:45:23.435145  535088 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1101 08:45:23.435569  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.436246  535088 out.go:179]   - Using image docker.io/registry:3.0.0
	I1101 08:45:23.436291  535088 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 08:45:23.437459  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1101 08:45:23.436270  535088 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1101 08:45:23.437541  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1101 08:45:23.437032  535088 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 08:45:23.437636  535088 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 08:45:23.437844  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1101 08:45:23.437918  535088 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 08:45:23.437851  535088 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1101 08:45:23.437941  535088 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 08:45:23.438856  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1101 08:45:23.437976  535088 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 08:45:23.438988  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1101 08:45:23.439032  535088 out.go:179]   - Using image docker.io/busybox:stable
	I1101 08:45:23.439073  535088 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1101 08:45:23.439539  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1101 08:45:23.439090  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1101 08:45:23.439094  535088 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1101 08:45:23.439317  535088 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 08:45:23.439929  535088 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 08:45:23.439932  535088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1101 08:45:23.439957  535088 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1101 08:45:23.439990  535088 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 08:45:23.440001  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 08:45:23.440144  535088 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 08:45:23.440159  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1101 08:45:23.442297  535088 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 08:45:23.442308  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1101 08:45:23.442298  535088 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1101 08:45:23.443272  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.443791  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.443933  535088 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 08:45:23.443957  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1101 08:45:23.444059  535088 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 08:45:23.444083  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1101 08:45:23.444856  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.444941  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.445160  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1101 08:45:23.445705  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.446038  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.446083  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.446929  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.448105  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1101 08:45:23.448713  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.449090  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.450028  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.450296  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.450327  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.450341  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.450369  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.450600  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1101 08:45:23.451017  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.451085  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.451162  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.451241  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.451823  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.451855  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.452155  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.452274  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.452437  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.452519  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.452542  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.452550  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.452567  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.452769  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.453008  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.453181  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.453204  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.453341  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1101 08:45:23.453485  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.453526  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.453547  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.453582  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.453698  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.453748  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.453776  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.453961  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.454247  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.454637  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.454592  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.454668  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.454765  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.454810  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.454640  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.454828  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.454953  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.455189  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.455476  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.455511  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.455565  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.455603  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.455714  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.455949  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1101 08:45:23.456005  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.457369  535088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1101 08:45:23.457390  535088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1101 08:45:23.460387  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.460852  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.460874  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.461072  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	W1101 08:45:23.763758  535088 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57416->192.168.39.195:22: read: connection reset by peer
	I1101 08:45:23.763807  535088 retry.go:31] will retry after 294.020846ms: ssh: handshake failed: read tcp 192.168.39.1:57416->192.168.39.195:22: read: connection reset by peer
	W1101 08:45:23.763891  535088 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57426->192.168.39.195:22: read: connection reset by peer
	I1101 08:45:23.763941  535088 retry.go:31] will retry after 247.932093ms: ssh: handshake failed: read tcp 192.168.39.1:57426->192.168.39.195:22: read: connection reset by peer
	I1101 08:45:23.987612  535088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 08:45:23.987618  535088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 08:45:24.391549  535088 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1101 08:45:24.391592  535088 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1101 08:45:24.396118  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 08:45:24.428988  535088 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1101 08:45:24.429026  535088 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1101 08:45:24.539937  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 08:45:24.542018  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 08:45:24.551067  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1101 08:45:24.578439  535088 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1101 08:45:24.578476  535088 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1101 08:45:24.590870  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 08:45:24.593597  535088 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 08:45:24.593630  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1101 08:45:24.648891  535088 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:24.648945  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1101 08:45:24.654530  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 08:45:24.691639  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 08:45:24.775174  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 08:45:24.894476  535088 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1101 08:45:24.894518  535088 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1101 08:45:25.110719  535088 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1101 08:45:25.110755  535088 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1101 08:45:25.248567  535088 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 08:45:25.248606  535088 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 08:45:25.251834  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 08:45:25.279634  535088 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1101 08:45:25.279661  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1101 08:45:25.282613  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:25.356642  535088 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1101 08:45:25.356672  535088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1101 08:45:25.596573  535088 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1101 08:45:25.596609  535088 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1101 08:45:25.610846  535088 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1101 08:45:25.610885  535088 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1101 08:45:25.674735  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1101 08:45:25.705462  535088 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 08:45:25.705495  535088 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 08:45:25.746878  535088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1101 08:45:25.746929  535088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1101 08:45:25.925617  535088 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1101 08:45:25.925645  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1101 08:45:25.996036  535088 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1101 08:45:25.996070  535088 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1101 08:45:26.051328  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 08:45:26.240447  535088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1101 08:45:26.240483  535088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1101 08:45:26.408185  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1101 08:45:26.436460  535088 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 08:45:26.436501  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1101 08:45:26.557448  535088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1101 08:45:26.557481  535088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1101 08:45:26.856571  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 08:45:27.059648  535088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1101 08:45:27.059683  535088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1101 08:45:27.286113  535088 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.298454996s)
	I1101 08:45:27.286197  535088 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.298476587s)
	I1101 08:45:27.286240  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.890088886s)
	I1101 08:45:27.286229  535088 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1101 08:45:27.286918  535088 node_ready.go:35] waiting up to 6m0s for node "addons-994396" to be "Ready" ...
	I1101 08:45:27.312278  535088 node_ready.go:49] node "addons-994396" is "Ready"
	I1101 08:45:27.312325  535088 node_ready.go:38] duration metric: took 25.37676ms for node "addons-994396" to be "Ready" ...
	I1101 08:45:27.312346  535088 api_server.go:52] waiting for apiserver process to appear ...
	I1101 08:45:27.312422  535088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 08:45:27.686576  535088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1101 08:45:27.686612  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1101 08:45:27.792267  535088 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-994396" context rescaled to 1 replicas
	I1101 08:45:28.140990  535088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1101 08:45:28.141032  535088 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1101 08:45:28.704311  535088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1101 08:45:28.704352  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1101 08:45:29.292401  535088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1101 08:45:29.292429  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1101 08:45:29.854708  535088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 08:45:29.854740  535088 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1101 08:45:30.288568  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 08:45:30.575091  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.033025614s)
	I1101 08:45:30.862016  535088 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1101 08:45:30.865323  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:30.865769  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:30.865797  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:30.866047  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:31.632521  535088 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1101 08:45:31.806924  535088 addons.go:239] Setting addon gcp-auth=true in "addons-994396"
	I1101 08:45:31.807015  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:31.809359  535088 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1101 08:45:31.813090  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:31.814762  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:31.814801  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:31.814989  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:33.008057  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.456928918s)
	I1101 08:45:33.008164  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.417239871s)
	I1101 08:45:33.008205  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.35364594s)
	I1101 08:45:33.008240  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.316568456s)
	I1101 08:45:33.008302  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (8.233079465s)
	I1101 08:45:33.008386  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.756527935s)
	I1101 08:45:33.008524  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.725858558s)
	I1101 08:45:33.008553  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.333786806s)
	W1101 08:45:33.008563  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:33.008566  535088 addons.go:480] Verifying addon registry=true in "addons-994396"
	I1101 08:45:33.008586  535088 retry.go:31] will retry after 241.480923ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:33.008638  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.957281467s)
	I1101 08:45:33.008733  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.600492861s)
	I1101 08:45:33.008738  535088 addons.go:480] Verifying addon metrics-server=true in "addons-994396"
	I1101 08:45:33.010227  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.470250108s)
	I1101 08:45:33.010253  535088 addons.go:480] Verifying addon ingress=true in "addons-994396"
	I1101 08:45:33.011210  535088 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-994396 service yakd-dashboard -n yakd-dashboard
	
	I1101 08:45:33.011218  535088 out.go:179] * Verifying registry addon...
	I1101 08:45:33.012250  535088 out.go:179] * Verifying ingress addon...
	I1101 08:45:33.014024  535088 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1101 08:45:33.015512  535088 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1101 08:45:33.051723  535088 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 08:45:33.051749  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:33.051812  535088 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1101 08:45:33.051833  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 08:45:33.111540  535088 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1101 08:45:33.250325  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:33.619402  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:33.619673  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:33.847569  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.990948052s)
	I1101 08:45:33.847595  535088 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (6.535150405s)
	I1101 08:45:33.847621  535088 api_server.go:72] duration metric: took 10.424417181s to wait for apiserver process to appear ...
	I1101 08:45:33.847629  535088 api_server.go:88] waiting for apiserver healthz status ...
	W1101 08:45:33.847626  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1101 08:45:33.847652  535088 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I1101 08:45:33.847651  535088 retry.go:31] will retry after 218.125549ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1101 08:45:33.908865  535088 api_server.go:279] https://192.168.39.195:8443/healthz returned 200:
	ok
	I1101 08:45:33.910593  535088 api_server.go:141] control plane version: v1.34.1
	I1101 08:45:33.910629  535088 api_server.go:131] duration metric: took 62.993472ms to wait for apiserver health ...
	I1101 08:45:33.910638  535088 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 08:45:33.979264  535088 system_pods.go:59] 17 kube-system pods found
	I1101 08:45:33.979341  535088 system_pods.go:61] "amd-gpu-device-plugin-vssmp" [a3b8c16e-b583-47df-a5c2-97218d3ec5be] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 08:45:33.979358  535088 system_pods.go:61] "coredns-66bc5c9577-2rqh8" [b131b2b2-f9b9-4197-8bc7-4d1bc185c804] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:45:33.979373  535088 system_pods.go:61] "coredns-66bc5c9577-8b9dw" [7580a21e-bef2-4e34-84b5-b8f67e32b346] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:45:33.979381  535088 system_pods.go:61] "etcd-addons-994396" [9ed2483c-c69f-483c-a489-238983cc8e9e] Running
	I1101 08:45:33.979388  535088 system_pods.go:61] "kube-apiserver-addons-994396" [0d587a06-f48e-4068-bb17-3a28d8a8d340] Running
	I1101 08:45:33.979401  535088 system_pods.go:61] "kube-controller-manager-addons-994396" [e60002dc-411e-458d-b7ea-affbee71d5a0] Running
	I1101 08:45:33.979413  535088 system_pods.go:61] "kube-ingress-dns-minikube" [d947f942-2149-492a-9b4e-1f9c22405815] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:45:33.979421  535088 system_pods.go:61] "kube-proxy-fbmdq" [dc5dd6b4-2f38-4c9d-acd8-92f7984fd96a] Running
	I1101 08:45:33.979431  535088 system_pods.go:61] "kube-scheduler-addons-994396" [bfc13d51-5be5-4462-b4a9-5d4f37f75bc4] Running
	I1101 08:45:33.979438  535088 system_pods.go:61] "metrics-server-85b7d694d7-qpjgn" [ca6b12be-7c02-4334-aa28-6300877d8e89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:45:33.979452  535088 system_pods.go:61] "nvidia-device-plugin-daemonset-bn97p" [8cc13452-31c6-46b5-8efb-e8b44ec63c27] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 08:45:33.979468  535088 system_pods.go:61] "registry-6b586f9694-b4ph6" [f2c8e5be-bee4-4b31-a8dc-ee43d6a6430c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:45:33.979480  535088 system_pods.go:61] "registry-creds-764b6fb674-xstzf" [75cdadc5-e3ea-4aae-9002-6dca21e0f758] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:45:33.979501  535088 system_pods.go:61] "registry-proxy-bzs78" [151e456a-63e0-4527-8511-34c4444fef48] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 08:45:33.979512  535088 system_pods.go:61] "snapshot-controller-7d9fbc56b8-2pbx5" [e9e973a4-20dd-4785-a3d6-1557c012cc76] Pending
	I1101 08:45:33.979522  535088 system_pods.go:61] "snapshot-controller-7d9fbc56b8-jbkmr" [19dc2ae7-668b-4952-9c2d-6602eac4449e] Pending
	I1101 08:45:33.979531  535088 system_pods.go:61] "storage-provisioner" [a0182754-0c9c-458b-a340-20ec025cb56c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 08:45:33.979545  535088 system_pods.go:74] duration metric: took 68.899123ms to wait for pod list to return data ...
	I1101 08:45:33.979563  535088 default_sa.go:34] waiting for default service account to be created ...
	I1101 08:45:34.005592  535088 default_sa.go:45] found service account: "default"
	I1101 08:45:34.005620  535088 default_sa.go:55] duration metric: took 26.049347ms for default service account to be created ...
	I1101 08:45:34.005631  535088 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 08:45:34.029039  535088 system_pods.go:86] 17 kube-system pods found
	I1101 08:45:34.029088  535088 system_pods.go:89] "amd-gpu-device-plugin-vssmp" [a3b8c16e-b583-47df-a5c2-97218d3ec5be] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 08:45:34.029098  535088 system_pods.go:89] "coredns-66bc5c9577-2rqh8" [b131b2b2-f9b9-4197-8bc7-4d1bc185c804] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:45:34.029109  535088 system_pods.go:89] "coredns-66bc5c9577-8b9dw" [7580a21e-bef2-4e34-84b5-b8f67e32b346] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:45:34.029116  535088 system_pods.go:89] "etcd-addons-994396" [9ed2483c-c69f-483c-a489-238983cc8e9e] Running
	I1101 08:45:34.029123  535088 system_pods.go:89] "kube-apiserver-addons-994396" [0d587a06-f48e-4068-bb17-3a28d8a8d340] Running
	I1101 08:45:34.029128  535088 system_pods.go:89] "kube-controller-manager-addons-994396" [e60002dc-411e-458d-b7ea-affbee71d5a0] Running
	I1101 08:45:34.029139  535088 system_pods.go:89] "kube-ingress-dns-minikube" [d947f942-2149-492a-9b4e-1f9c22405815] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:45:34.029144  535088 system_pods.go:89] "kube-proxy-fbmdq" [dc5dd6b4-2f38-4c9d-acd8-92f7984fd96a] Running
	I1101 08:45:34.029150  535088 system_pods.go:89] "kube-scheduler-addons-994396" [bfc13d51-5be5-4462-b4a9-5d4f37f75bc4] Running
	I1101 08:45:34.029156  535088 system_pods.go:89] "metrics-server-85b7d694d7-qpjgn" [ca6b12be-7c02-4334-aa28-6300877d8e89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:45:34.029165  535088 system_pods.go:89] "nvidia-device-plugin-daemonset-bn97p" [8cc13452-31c6-46b5-8efb-e8b44ec63c27] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 08:45:34.029173  535088 system_pods.go:89] "registry-6b586f9694-b4ph6" [f2c8e5be-bee4-4b31-a8dc-ee43d6a6430c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:45:34.029184  535088 system_pods.go:89] "registry-creds-764b6fb674-xstzf" [75cdadc5-e3ea-4aae-9002-6dca21e0f758] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:45:34.029194  535088 system_pods.go:89] "registry-proxy-bzs78" [151e456a-63e0-4527-8511-34c4444fef48] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 08:45:34.029202  535088 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2pbx5" [e9e973a4-20dd-4785-a3d6-1557c012cc76] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:45:34.029211  535088 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jbkmr" [19dc2ae7-668b-4952-9c2d-6602eac4449e] Pending
	I1101 08:45:34.029232  535088 system_pods.go:89] "storage-provisioner" [a0182754-0c9c-458b-a340-20ec025cb56c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 08:45:34.029244  535088 system_pods.go:126] duration metric: took 23.605903ms to wait for k8s-apps to be running ...
	I1101 08:45:34.029259  535088 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 08:45:34.029328  535088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
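[editor's note] The command above is the readiness probe minikube issues over SSH while waiting for the kubelet service (the WaitForService step timed later in this log). The sketch below is only a hedged equivalent one could run directly on the node; it is not taken from this run.

    # Hedged sketch (assumption): systemctl exits 0 when the unit is active,
    # so this reports kubelet's state without printing unit status output.
    sudo systemctl is-active --quiet kubelet && echo "kubelet active" || echo "kubelet not active"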
	I1101 08:45:34.057589  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:34.060041  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:34.066143  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 08:45:34.536703  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:34.540613  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:35.033279  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:35.057492  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:35.517382  535088 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.707985766s)
	I1101 08:45:35.519009  535088 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 08:45:35.519008  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.230381443s)
	I1101 08:45:35.519151  535088 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-994396"
	I1101 08:45:35.520249  535088 out.go:179] * Verifying csi-hostpath-driver addon...
	I1101 08:45:35.521386  535088 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1101 08:45:35.522322  535088 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1101 08:45:35.523075  535088 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1101 08:45:35.523091  535088 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1101 08:45:35.574185  535088 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 08:45:35.574221  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:35.574179  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:35.589220  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:35.670403  535088 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1101 08:45:35.670443  535088 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1101 08:45:35.926227  535088 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 08:45:35.926260  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1101 08:45:36.028744  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:36.029084  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:36.032411  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:36.103812  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 08:45:36.521069  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:36.523012  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:36.530349  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:37.024569  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:37.026839  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:37.029801  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:37.202891  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.952517264s)
	W1101 08:45:37.202946  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:37.202972  535088 retry.go:31] will retry after 301.106324ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
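[editor's note] The repeated apply failures above all come from kubectl refusing /etc/kubernetes/addons/ig-crd.yaml because at least one document in that file lacks its manifest header. The file's actual contents are not shown in this log, so the following is only a hedged sketch of what the validator expects and how one might confirm it on the node, not a reconstruction of the addon manifest; a CustomResourceDefinition normally starts with the apiextensions.k8s.io/v1 header shown in the comments.

    # Hedged sketch (assumption, not taken from this run): each YAML document
    # applied by kubectl must declare a header such as
    #   apiVersion: apiextensions.k8s.io/v1
    #   kind: CustomResourceDefinition
    # Count how many header fields the addon file actually declares:
    grep -cE '^(apiVersion|kind):' /etc/kubernetes/addons/ig-crd.yaml

kubectl's own suggestion in the stderr (--validate=false) would merely skip this check rather than fix the manifest, which is why minikube's retry loop keeps reapplying with the same result.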
	I1101 08:45:37.203012  535088 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.173650122s)
	I1101 08:45:37.203055  535088 system_svc.go:56] duration metric: took 3.173789622s WaitForService to wait for kubelet
	I1101 08:45:37.203071  535088 kubeadm.go:587] duration metric: took 13.779865062s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 08:45:37.203102  535088 node_conditions.go:102] verifying NodePressure condition ...
	I1101 08:45:37.208388  535088 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 08:45:37.208416  535088 node_conditions.go:123] node cpu capacity is 2
	I1101 08:45:37.208429  535088 node_conditions.go:105] duration metric: took 5.320357ms to run NodePressure ...
	I1101 08:45:37.208441  535088 start.go:242] waiting for startup goroutines ...
	I1101 08:45:37.368099  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.301889566s)
	I1101 08:45:37.504488  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:37.521079  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:37.521246  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:37.528201  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:37.991386  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.887518439s)
	I1101 08:45:37.992795  535088 addons.go:480] Verifying addon gcp-auth=true in "addons-994396"
	I1101 08:45:37.995595  535088 out.go:179] * Verifying gcp-auth addon...
	I1101 08:45:37.997651  535088 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1101 08:45:38.013086  535088 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1101 08:45:38.013118  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:38.028095  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:38.030768  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:38.041146  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:38.502928  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:38.520170  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:38.521930  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:38.526766  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:39.004207  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:39.019028  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:39.024223  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:39.031869  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:39.206009  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.701470957s)
	W1101 08:45:39.206061  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:39.206085  535088 retry.go:31] will retry after 556.568559ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:39.503999  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:39.527340  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:39.537658  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:39.537658  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:39.763081  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:40.006287  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:40.021411  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:40.025825  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:40.028609  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:40.507622  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:40.523293  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:40.527164  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:40.530886  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:41.005619  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:41.021779  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:41.023058  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:41.028879  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:41.134842  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.371696885s)
	W1101 08:45:41.134889  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:41.134933  535088 retry.go:31] will retry after 634.404627ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:41.501998  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:41.519483  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:41.522699  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:41.527571  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:41.769910  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:42.004958  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:42.021144  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:42.021931  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:42.027195  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:42.501545  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:42.519865  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:42.522754  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:42.526903  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:42.775680  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.00572246s)
	W1101 08:45:42.775745  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:42.775781  535088 retry.go:31] will retry after 1.084498807s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:43.002944  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:43.020356  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:43.020475  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:43.134004  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:43.504736  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:43.519636  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:43.520489  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:43.525810  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:43.861263  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:44.001829  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:44.019292  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:44.021251  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:44.026202  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:44.503149  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:44.520624  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:44.520651  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:44.526211  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:45:44.623495  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:44.623540  535088 retry.go:31] will retry after 1.856024944s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:45.001600  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:45.020242  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:45.022140  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:45.026024  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:45.507084  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:45.523761  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:45.524237  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:45.529475  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:46.005033  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:46.108846  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:46.109151  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:46.109369  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:46.479732  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:46.503499  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:46.520286  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:46.526234  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:46.529155  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:47.001657  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:47.019094  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:47.023015  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:47.027997  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:47.507760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:47.519999  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:47.524925  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:47.528391  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:47.666049  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.186267383s)
	W1101 08:45:47.666140  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:47.666174  535088 retry.go:31] will retry after 4.139204607s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:48.003042  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:48.019125  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:48.027235  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:48.031596  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:48.722743  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:48.727291  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:48.727372  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:48.727610  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:49.004382  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:49.019147  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:49.021814  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:49.026878  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:49.504442  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:49.517916  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:49.520088  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:49.525828  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:50.001964  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:50.024108  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:50.024120  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:50.029503  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:50.504014  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:50.523676  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:50.527259  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:50.529569  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:51.002796  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:51.022756  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:51.022985  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:51.026836  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:51.501595  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:51.523272  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:51.526829  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:51.530749  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:51.806085  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:52.003559  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:52.019381  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:52.019451  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:52.027431  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:52.504756  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:52.522177  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:52.526818  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:52.531367  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:53.001310  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:53.018845  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:53.024989  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:53.029380  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:53.104383  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.298241592s)
	W1101 08:45:53.104437  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:53.104469  535088 retry.go:31] will retry after 2.354213604s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:53.504133  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:53.521260  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:53.521459  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:53.530531  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:54.465678  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:54.465798  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:54.466036  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:54.466159  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:54.562016  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:54.562014  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:54.562133  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:54.562454  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:55.001120  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:55.025479  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:55.025582  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:55.026324  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:55.460012  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:55.504349  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:55.519300  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:55.521013  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:55.527541  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:56.002846  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:56.025053  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:56.029411  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:56.032019  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:56.575604  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:56.575734  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:56.577952  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:56.577981  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:56.753301  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.293228646s)
	W1101 08:45:56.753349  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:56.753376  535088 retry.go:31] will retry after 4.355574242s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:57.006174  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:57.021087  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:57.023942  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:57.029154  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:57.505515  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:57.520197  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:57.523156  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:57.525955  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:58.001505  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:58.018201  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:58.022518  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:58.025296  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:58.505701  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:58.524023  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:58.526483  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:58.536508  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:59.001410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:59.017471  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:59.020442  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:59.025457  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:59.501507  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:59.519043  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:59.520094  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:59.525760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:00.001248  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:00.017563  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:00.020984  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:00.026549  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:00.501281  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:00.519844  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:00.521324  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:00.525700  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:01.001953  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:01.020105  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:01.020877  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:01.025885  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:01.110059  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:46:01.502129  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:01.519377  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:01.523178  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:01.526440  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:46:01.845885  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:46:01.845957  535088 retry.go:31] will retry after 7.871379914s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:46:02.001335  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:02.019157  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:02.021487  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:02.026236  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:02.502141  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:02.517119  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:02.519718  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:02.526453  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:03.002138  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:03.017025  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:03.019806  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:03.026770  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:03.502833  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:03.520032  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:03.520118  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:03.526559  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:04.064971  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:04.065055  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:04.068066  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:04.068526  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:04.502308  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:04.520197  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:04.521585  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:04.526046  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:05.003330  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:05.017484  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:05.019495  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:05.026496  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:05.501222  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:05.517839  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:05.520724  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:05.525994  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:06.001368  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:06.019614  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:06.020124  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:06.025568  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:06.500972  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:06.518736  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:06.520211  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:06.526135  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:07.002092  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:07.018836  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:07.020757  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:07.025238  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:07.503063  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:07.517984  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:07.519990  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:07.528565  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:08.002059  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:08.018162  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:08.020563  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:08.026357  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:08.501444  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:08.517337  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:08.519389  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:08.525929  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:09.002578  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:09.018521  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:09.020246  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:09.026866  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:09.501972  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:09.518157  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:09.519720  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:09.527087  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:09.718336  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:46:10.004096  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:10.021038  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:10.021333  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:10.027767  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:46:10.413712  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:46:10.413760  535088 retry.go:31] will retry after 19.114067213s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
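
Note: retry.go re-runs the failed apply with growing delays (7.87s, then 19.11s, then 26.86s in this log). The sketch below shows that retry-with-backoff pattern in Go; it is purely illustrative and not minikube's implementation, and the starting delay, attempt count and jitter are assumptions.

// retry_sketch.go -- illustrative retry loop with a roughly doubling, jittered delay,
// mirroring the "apply failed, will retry after ..." behaviour seen above.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// prefixEach interleaves a flag before every value: ["-f", a, "-f", b, ...].
func prefixEach(flag string, vals []string) []string {
	out := make([]string, 0, 2*len(vals))
	for _, v := range vals {
		out = append(out, flag, v)
	}
	return out
}

// applyAddon re-runs kubectl apply until it succeeds or attempts run out,
// sleeping for a growing, jittered delay between failures.
func applyAddon(files ...string) error {
	args := append([]string{"apply", "--force"}, prefixEach("-f", files)...)
	delay := 5 * time.Second // assumed starting delay
	var err error
	for attempt := 1; attempt <= 5; attempt++ {
		if err = exec.Command("kubectl", args...).Run(); err == nil {
			return nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay))) // jitter, as the uneven durations in the log suggest
		fmt.Printf("apply failed, will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
		delay *= 2
	}
	return err
}

func main() {
	_ = applyAddon("/etc/kubernetes/addons/ig-crd.yaml", "/etc/kubernetes/addons/ig-deployment.yaml")
}
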
	I1101 08:46:10.501358  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:10.517730  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:10.520404  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:10.526363  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:11.002849  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:11.019496  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:11.019995  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:11.026025  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:11.501655  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:11.518007  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:11.521219  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:11.525426  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:12.000873  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:12.017867  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:12.020240  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:12.026060  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:12.502263  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:12.518472  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:12.519451  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:12.526084  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:13.002272  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:13.017626  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:13.020404  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:13.025249  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:13.501457  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:13.518992  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:13.520857  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:13.526486  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:14.000572  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:14.019408  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:14.020492  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:14.025038  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:14.501826  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:14.518060  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:14.520198  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:14.526075  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:15.002744  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:15.018115  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:15.019636  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:15.025834  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:15.501625  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:15.518152  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:15.519669  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:15.525079  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:16.001990  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:16.021114  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:16.022918  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:16.025425  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:16.501061  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:16.519200  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:16.519212  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:16.525882  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:17.002326  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:17.017673  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:17.020197  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:17.026945  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:17.502364  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:17.518476  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:17.520804  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:17.526128  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:18.004541  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:18.017957  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:18.020439  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:18.028122  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:18.502479  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:18.519387  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:18.519499  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:18.525828  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:19.003038  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:19.019735  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:19.020844  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:19.027661  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:19.501803  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:19.519280  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:19.519835  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:19.526155  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:20.001793  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:20.018442  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:20.019878  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:20.025324  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:20.501246  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:20.520476  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:20.520774  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:20.525872  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:21.002010  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:21.018221  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:21.019989  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:21.025817  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:21.501814  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:21.518070  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:21.520290  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:21.526096  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:22.002018  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:22.019705  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:22.021053  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:22.026071  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:22.501728  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:22.519405  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:22.520617  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:22.525885  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:23.001744  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:23.019715  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:23.020644  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:23.025597  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:23.502175  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:23.519303  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:23.520222  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:23.526675  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:24.001582  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:24.018997  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:24.020524  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:24.025085  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:24.501770  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:24.519601  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:24.520468  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:24.525222  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:25.002719  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:25.018650  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:25.020825  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:25.026802  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:25.501690  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:25.517716  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:25.520832  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:25.525983  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:26.002212  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:26.017751  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:26.019488  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:26.025775  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:26.501873  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:26.519741  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:26.519825  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:26.526640  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:27.001148  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:27.019101  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:27.019815  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:27.025796  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:27.502066  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:27.518977  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:27.520625  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:27.527501  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:28.000982  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:28.018045  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:28.019539  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:28.026321  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:28.502967  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:28.517882  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:28.520453  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:28.525074  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:29.002093  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:29.019794  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:29.021920  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:29.025114  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:29.502294  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:29.517914  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:29.519213  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:29.526478  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:29.528534  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:46:30.001669  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:30.023801  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:30.027674  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:30.029691  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:46:30.252885  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:46:30.252962  535088 retry.go:31] will retry after 26.857733331s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
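
Note: the repeated kapi.go:96 lines are the addon wait loop polling pods by label selector roughly twice per second until they leave Pending. A minimal client-go sketch of that polling follows, for illustration only; the kubeconfig path comes from the log, while the namespace, timeout and poll interval are assumptions.

// wait_sketch.go -- illustrative "waiting for pod" loop: list pods by label selector
// and report their phase until one is Running or the deadline passes.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path from the log
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	selector := "kubernetes.io/minikube-addons=registry" // one of the selectors polled above
	namespace := "kube-system"                           // assumed namespace
	deadline := time.Now().Add(6 * time.Minute)          // assumed timeout

	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods(namespace).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Printf("pod %s is Running\n", p.Name)
					return
				}
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			}
		}
		time.Sleep(500 * time.Millisecond) // the log polls roughly twice a second
	}
	fmt.Println("timed out waiting for pod")
}
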
	I1101 08:46:30.501958  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:30.518713  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:30.519451  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:30.526672  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:31.001425  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:31.019226  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:31.020064  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:31.026340  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:31.501882  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:31.518669  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:31.519450  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:31.526794  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:32.001295  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:32.018253  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:32.020474  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:32.026067  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:32.501521  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:32.520301  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:32.522051  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:32.526250  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:33.003215  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:33.018591  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:33.020188  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:33.026759  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:33.501809  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:33.518399  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:33.520442  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:33.526258  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:34.001781  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:34.019409  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:34.019682  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:34.026569  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:34.501910  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:34.518388  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:34.519877  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:34.526549  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:35.002205  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:35.018104  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:35.019931  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:35.026760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:35.501124  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:35.517626  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:35.519260  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:35.526635  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:36.001556  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:36.017651  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:36.020209  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:36.026600  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:36.501047  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:36.519095  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:36.520391  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:36.526515  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:37.001745  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:37.017677  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:37.019854  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:37.026083  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:37.504677  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:37.518518  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:37.519504  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:37.527753  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:38.001657  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:38.018846  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:38.020360  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:38.026665  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:38.501370  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:38.517442  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:38.519287  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:38.525990  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:39.001713  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:39.017774  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:39.019461  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:39.026372  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:39.500859  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:39.519797  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:39.520622  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:39.525917  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:40.001647  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:40.017652  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:40.019113  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:40.025818  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:40.501928  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:40.518504  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:40.520340  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:40.526037  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:41.002231  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:41.017533  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:41.019687  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:41.025641  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:41.501410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:41.518018  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:41.519326  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:41.527062  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:42.001935  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:42.018556  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:42.020009  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:42.025868  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:42.501909  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:42.519346  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:42.521539  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:42.525544  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:43.003422  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:43.018807  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:43.020340  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:43.026621  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:43.501787  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:43.517772  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:43.520385  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:43.526006  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:44.001729  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:44.018572  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:44.020505  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:44.027512  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:44.500861  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:44.517878  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:44.519941  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:44.525966  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:45.002733  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:45.022017  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:45.023425  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:45.027913  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:45.501505  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:45.518036  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:45.518304  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:45.526497  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:46.000839  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:46.018027  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:46.020574  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:46.025140  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:46.502126  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:46.517267  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:46.519576  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:46.525318  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:47.002664  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:47.019029  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:47.020440  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:47.026307  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:47.502751  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:47.518532  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:47.519877  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:47.525668  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:48.001531  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:48.017987  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:48.018860  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:48.025975  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:48.501993  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:48.519439  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:48.520680  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:48.525869  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:49.003110  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:49.020088  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:49.020281  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:49.026209  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:49.501972  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:49.518761  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:49.520450  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:49.526669  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:50.001945  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:50.019111  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:50.020657  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:50.025651  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:50.501137  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:50.519077  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:50.519422  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:50.526050  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:51.002264  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:51.017514  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:51.020444  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:51.026653  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:51.501218  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:51.517606  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:51.519711  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:51.525538  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:52.001505  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:52.017697  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:52.019403  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:52.027381  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:52.501030  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:52.519679  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:52.520880  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:52.525311  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:53.002074  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:53.017920  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:53.020689  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:53.025485  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:53.501565  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:53.518005  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:53.518985  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:53.525510  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:54.001882  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:54.018972  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:54.019868  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:54.025509  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:54.501041  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:54.519696  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:54.520156  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:54.526253  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:55.003167  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:55.017108  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:55.020966  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:55.025536  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:55.501588  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:55.519412  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:55.520387  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:55.526801  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:56.001703  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:56.018098  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:56.019805  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:56.025874  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:56.501547  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:56.518508  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:56.519409  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:56.527341  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:57.001269  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:57.017737  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:57.019765  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:57.026345  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:57.111554  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:46:57.502821  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:57.521781  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:57.523859  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:57.526058  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:46:57.837380  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 08:46:57.837579  535088 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
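	(Note on the failure above: the validation error reports that at least one document in /etc/kubernetes/addons/ig-crd.yaml was applied without its top-level apiVersion and kind fields, which kubectl requires unless --validate=false is passed. As a hedged illustration only, not taken from this run, a CustomResourceDefinition manifest that passes this validation begins with those two fields; the group and resource names below are hypothetical placeholders:
	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
	metadata:
	  name: traces.gadget.example.io        # hypothetical <plural>.<group> name, for illustration
	spec:
	  group: gadget.example.io              # hypothetical API group
	  names:
	    kind: Trace
	    plural: traces
	  scope: Namespaced
	  versions:
	    - name: v1alpha1
	      served: true
	      storage: true
	      schema:
	        openAPIV3Schema:
	          type: object
	          x-kubernetes-preserve-unknown-fields: true
	A manifest missing the first two lines would reproduce the "apiVersion not set, kind not set" error seen in this log.)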
	I1101 08:46:58.002477  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:58.017866  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:58.019513  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:58.025873  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:58.501877  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:58.518871  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:58.519700  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:58.525438  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:59.004488  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:59.026436  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:59.031423  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:59.033704  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:59.508129  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:59.521490  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:59.521737  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:59.526781  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:00.003739  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:00.022791  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:00.022910  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:00.026491  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:00.501517  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:00.517703  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:00.518550  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:00.528527  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:01.010322  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:01.026679  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:01.030087  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:01.030397  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:01.502386  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:01.517530  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:01.522260  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:01.532240  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:02.002156  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:02.022137  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:02.023086  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:02.026049  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:02.504322  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:02.519252  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:02.523461  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:02.528764  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:03.004016  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:03.019471  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:03.021442  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:03.026419  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:03.504419  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:03.519469  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:03.520406  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:03.525550  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:04.002462  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:04.020193  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:04.021462  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:04.026107  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:04.501642  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:04.517490  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:04.519930  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:04.526445  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:05.005197  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:05.018536  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:05.023123  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:05.029475  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:05.502664  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:05.518118  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:05.520518  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:05.526091  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:06.002738  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:06.019575  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:06.022744  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:06.026515  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:06.502554  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:06.519943  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:06.521590  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:06.526208  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:07.004023  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:07.019789  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:07.020273  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:07.026416  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:07.504157  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:07.518612  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:07.520773  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:07.527827  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:08.007295  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:08.020757  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:08.024258  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:08.031878  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:08.505225  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:08.518839  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:08.521622  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:08.525366  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:09.003369  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:09.024660  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:09.024787  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:09.029399  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:09.502978  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:09.520074  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:09.520999  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:09.527832  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:10.002118  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:10.019490  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:10.019688  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:10.026021  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:10.502365  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:10.517980  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:10.519426  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:10.526456  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:11.000763  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:11.017778  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:11.019554  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:11.025361  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:11.502621  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:11.519369  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:11.520248  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:11.525881  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:12.001298  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:12.019652  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:12.020408  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:12.026077  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:12.506179  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:12.518698  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:12.520608  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:12.525646  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:13.004165  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:13.018567  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:13.021172  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:13.026558  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:13.502399  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:13.517614  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:13.520163  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:13.526224  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:14.002692  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:14.018788  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:14.020233  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:14.026247  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:14.502451  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:14.519291  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:14.520395  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:14.528734  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:15.001583  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:15.017574  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:15.019594  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:15.027073  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:15.502087  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:15.518165  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:15.518856  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:15.526691  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:16.002848  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:16.019225  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:16.020564  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:16.025778  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:16.501756  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:16.518991  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:16.520609  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:16.525245  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:17.001845  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:17.019346  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:17.019684  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:17.026396  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:17.502188  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:17.517746  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:17.520856  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:17.525856  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:18.001858  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:18.018536  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:18.021348  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:18.026925  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:18.502390  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:18.517522  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:18.520124  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:18.525853  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:19.001850  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:19.019071  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:19.020953  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:19.025941  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:19.502259  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:19.517542  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:19.520882  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:19.526825  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:20.001558  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:20.018927  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:20.020008  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:20.025511  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:20.501320  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:20.517732  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:20.519487  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:20.526814  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:21.001370  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:21.018101  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:21.019530  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:21.025941  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:21.501703  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:21.517836  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:21.519684  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:21.526074  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:22.001809  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:22.017626  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:22.019534  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:22.025673  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:22.501888  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:22.520695  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:22.521501  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:22.527625  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:23.001636  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:23.017676  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:23.019410  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:23.026546  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:23.502193  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:23.517565  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:23.519741  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:23.525318  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:24.001469  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:24.018681  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:24.021251  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:24.026297  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:24.500658  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:24.517656  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:24.520275  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:24.526953  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:25.002390  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:25.018753  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:25.021470  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:25.026724  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:25.503080  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:25.519469  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:25.522083  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:25.525703  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:26.001480  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:26.018730  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:26.019775  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:26.025922  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:26.501850  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:26.518460  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:26.520597  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:26.526270  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:27.002686  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:27.017503  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:27.019988  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:27.026061  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:27.501773  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:27.519208  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:27.519306  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:27.526944  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:28.001885  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:28.018098  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:28.020961  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:28.026254  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:28.500970  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:28.519603  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:28.521180  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:28.526295  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:29.003607  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:29.018630  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:29.021082  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:29.026312  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:29.501919  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:29.517754  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:29.519736  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:29.525891  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:30.002036  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:30.018828  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:30.020404  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:30.026209  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:30.502329  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:30.517607  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:30.520177  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:30.527152  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:31.003066  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:31.020280  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:31.020496  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:31.026046  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:31.503011  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:31.519101  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:31.520154  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:31.525819  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:32.001349  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:32.017760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:32.020383  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:32.026548  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:32.501020  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:32.519372  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:32.520621  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:32.525197  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:33.001939  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:33.017981  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:33.018721  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:33.025389  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:33.502684  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:33.519286  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:33.519798  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:33.526360  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:34.001915  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:34.018089  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:34.018866  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:34.025884  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:34.502109  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:34.518315  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:34.520992  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:34.525955  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:35.001980  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:35.020058  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:35.020195  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:35.026107  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:35.502513  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:35.519131  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:35.519364  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:35.526431  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:36.001532  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:36.017633  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:36.019879  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:36.025714  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:36.501267  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:36.517441  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:36.519775  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:36.526367  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:37.002311  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:37.017625  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:37.020233  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:37.025830  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:37.502486  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:37.518494  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:37.519337  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:37.526256  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:38.002200  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:38.017679  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:38.020437  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:38.025635  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:38.502121  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:38.518742  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:38.519609  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:38.525528  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:39.001668  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:39.017868  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:39.019195  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:39.027138  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:39.502726  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:39.518837  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:39.519527  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:39.525448  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:40.037966  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:40.038824  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:40.039617  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:40.039888  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:40.510995  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:40.611235  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:40.611494  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:40.612020  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:41.007852  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:41.104319  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:41.105167  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:41.106241  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:41.503207  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:41.519701  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:41.523717  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:41.528111  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:42.002832  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:42.019368  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:42.026027  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:42.028968  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:42.504592  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:42.518781  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:42.522913  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:42.527017  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:43.002059  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:43.021540  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:43.022732  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:43.027733  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:43.501969  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:43.523064  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:43.523122  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:43.526723  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:44.016033  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:44.048228  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:44.048288  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:44.049707  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:44.510334  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:44.517005  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:44.520734  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:44.527760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:45.002493  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:45.025067  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:45.025090  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:45.030831  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:45.503106  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:45.519233  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:45.522740  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:45.526357  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:46.003368  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:46.021702  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:46.023084  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:46.025372  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:46.507201  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:46.528398  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:46.528540  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:46.528597  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:47.005313  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:47.021521  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:47.023522  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:47.030205  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:47.508306  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:47.517975  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:47.523254  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:47.528801  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:48.004599  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:48.018025  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:48.024054  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:48.030295  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:48.504150  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:48.518048  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:48.519937  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:48.527633  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:49.003426  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:49.021317  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:49.104457  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:49.105285  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:49.502613  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:49.520941  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:49.521038  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:49.525762  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:50.002168  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:50.018353  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:50.019606  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:50.025332  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:50.501342  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:50.518265  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:50.520375  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:50.526058  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:51.001482  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:51.018509  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:51.018674  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:51.026149  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:51.502439  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:51.518320  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:51.519717  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:51.525114  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:52.001594  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:52.017697  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:52.019121  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:52.026265  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:52.501713  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:52.517565  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:52.519496  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:52.525722  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:53.001345  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:53.018104  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:53.020275  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:53.025637  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:53.503025  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:53.518670  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:53.520663  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:53.525659  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:54.001263  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:54.018846  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:54.019116  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:54.025335  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:54.502071  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:54.519000  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:54.519010  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:54.525456  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:55.001977  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:55.017957  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:55.021189  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:55.026699  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:55.502333  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:55.517379  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:55.519350  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:55.526773  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:56.001599  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:56.018008  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:56.020215  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:56.025828  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:56.501455  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:56.517521  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:56.519235  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:56.527201  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:57.001827  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:57.020037  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:57.020749  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:57.025827  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:57.503759  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:57.517849  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:57.520371  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:57.526800  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:58.002360  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:58.017843  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:58.020412  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:58.026527  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:58.501394  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:58.517523  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:58.520352  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:58.525725  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:59.002102  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:59.017074  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:59.020520  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:59.026683  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:59.502383  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:59.517821  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:59.520938  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:59.525444  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:00.004519  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:00.104585  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:00.104625  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:00.104775  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:00.501109  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:00.518462  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:00.519031  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:00.525932  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:01.001882  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:01.018255  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:01.019640  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:01.025291  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:01.503231  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:01.518634  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:01.520274  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:01.526356  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:02.002389  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:02.018529  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:02.019411  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:02.026657  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:02.501043  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:02.518076  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:02.519080  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:02.526504  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:03.001361  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:03.019762  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:03.022333  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:03.025239  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:03.501714  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:03.519163  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:03.521149  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:03.526410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:04.000747  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:04.019676  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:04.020330  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:04.026159  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:04.502467  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:04.518491  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:04.518845  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:04.525769  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:05.001664  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:05.019454  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:05.019620  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:05.027022  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:05.502850  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:05.518666  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:05.520316  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:05.526009  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:06.002470  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:06.017750  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:06.019816  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:06.025697  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:06.501760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:06.519481  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:06.519738  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:06.525711  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:07.001752  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:07.017749  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:07.019804  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:07.025660  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:07.501792  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:07.517577  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:07.519794  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:07.525244  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:08.002742  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:08.018517  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:08.020369  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:08.026630  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:08.501587  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:08.518305  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:08.519219  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:08.526380  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:09.000977  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:09.018805  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:09.019761  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:09.025690  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:09.501890  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:09.517987  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:09.520782  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:09.525601  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:10.001949  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:10.018921  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:10.020592  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:10.026413  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:10.501660  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:10.518677  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:10.518948  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:10.525564  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:11.001486  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:11.017692  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:11.019759  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:11.025724  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:11.503245  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:11.519474  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:11.520078  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:11.525649  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:12.002655  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:12.017994  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:12.020743  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:12.025544  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:12.500866  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:12.519004  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:12.520797  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:12.527102  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:13.001891  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:13.019380  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:13.020948  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:13.025584  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:13.502039  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:13.519170  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:13.520827  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:13.525891  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:14.002597  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:14.018456  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:14.019344  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:14.025889  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:14.501808  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:14.518199  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:14.520114  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:14.526515  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:15.000809  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:15.017935  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:15.019860  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:15.026010  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:15.502293  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:15.517549  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:15.520189  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:15.603271  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:16.001815  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:16.018392  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:16.020440  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:16.025577  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:16.501456  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:16.517675  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:16.519938  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:16.525413  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:17.000943  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:17.017838  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:17.021846  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:17.026719  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:17.502498  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:17.517532  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:17.518370  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:17.526307  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:18.002824  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:18.019355  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:18.019386  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:18.027193  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:18.501577  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:18.518262  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:18.520767  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:18.525078  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:19.002037  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:19.020156  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:19.021197  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:19.025423  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:19.501921  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:19.519607  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:19.520544  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:19.524793  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:20.001960  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:20.018434  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:20.020315  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:20.026179  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:20.503025  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:20.518911  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:20.520556  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:20.525269  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:21.002029  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:21.024168  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:21.026997  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:21.031803  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:21.502358  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:21.517786  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:21.518786  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:21.525830  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:22.001594  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:22.017338  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:22.018324  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:22.025889  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:22.503054  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:22.520388  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:22.521916  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:22.526202  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:23.002517  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:23.020216  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:23.021156  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:23.028984  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:23.500976  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:23.519154  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:23.519316  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:23.526809  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:24.002882  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:24.019205  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:24.020141  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:24.026965  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:24.501036  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:24.518337  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:24.519991  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:24.525486  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:25.001657  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:25.018947  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:25.019127  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:25.025725  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:25.501581  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:25.518560  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:25.520017  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:25.525518  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:26.001825  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:26.018331  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:26.020369  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:26.026403  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:26.501127  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:26.519632  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:26.520978  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:26.525884  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:27.002361  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:27.018164  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:27.020412  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:27.027021  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:27.502390  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:27.517925  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:27.520125  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:27.525535  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:28.002688  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:28.017322  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:28.019838  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:28.025328  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:28.501474  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:28.517324  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:28.519128  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:28.525804  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:29.001640  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:29.017615  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:29.019699  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:29.025407  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:29.501333  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:29.518228  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:29.520320  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:29.526401  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:30.001257  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:30.017769  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:30.019813  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:30.025681  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:30.501852  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:30.517912  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:30.519457  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:30.525502  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:31.001036  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:31.018891  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:31.019341  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:31.026847  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:31.501891  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:31.517945  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:31.519845  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:31.525477  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:32.002494  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:32.018364  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:32.019047  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:32.025949  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:32.501632  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:32.517753  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:32.519551  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:32.525075  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:33.002010  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:33.019109  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:33.021003  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:33.025940  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:33.503032  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:33.518866  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:33.520801  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:33.525566  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:34.002115  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:34.017835  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:34.020583  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:34.026191  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:34.502465  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:34.517620  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:34.520272  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:34.526608  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:35.000870  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:35.018932  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:35.019718  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:35.025748  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:35.502491  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:35.517523  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:35.519496  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:35.525784  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:36.001520  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:36.019495  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:36.020061  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:36.026348  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:36.501803  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:36.519550  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:36.519863  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:36.526033  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:37.001475  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:37.018365  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:37.019331  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:37.026308  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:37.502572  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:37.517421  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:37.520211  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:37.525925  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:38.001941  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:38.019309  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:38.020493  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:38.027497  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:38.501822  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:38.517786  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:38.520262  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:38.526454  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:39.003835  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:39.019771  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:39.020317  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:39.025953  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:39.501469  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:39.517769  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:39.519531  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:39.526394  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:40.001467  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:40.018767  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:40.018975  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:40.025574  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:40.501327  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:40.517147  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:40.519793  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:40.525870  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:41.001711  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:41.019756  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:41.022733  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:41.025432  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:41.501110  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:41.517577  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:41.520152  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:41.526331  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:42.001665  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:42.018212  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:42.020818  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:42.027301  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:42.502145  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:42.518137  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:42.520139  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:42.525932  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:43.002613  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:43.018231  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:43.019849  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:43.026083  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:43.501054  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:43.518385  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:43.519196  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:43.526209  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:44.002494  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:44.017824  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:44.020797  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:44.026068  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:44.501618  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:44.519136  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:44.519498  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:44.526198  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:45.001727  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:45.019695  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:45.020007  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:45.026210  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:45.502382  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:45.518209  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:45.520090  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:45.526008  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:46.002275  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:46.017575  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:46.020217  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:46.026182  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:46.501858  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:46.518887  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:46.520199  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:46.525849  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:47.001391  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:47.017528  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:47.019856  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:47.026978  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:47.502108  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:47.517185  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:47.519497  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:47.526193  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:48.002439  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:48.018567  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:48.019868  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:48.026369  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:48.502252  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:48.518245  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:48.519830  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:48.525789  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:49.002157  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:49.017975  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:49.020029  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:49.026100  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:49.504825  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:49.517735  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:49.522486  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:49.528548  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:50.005615  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:50.019305  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:50.021640  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:50.027410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:50.501443  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:50.519328  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:50.519829  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:50.526094  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:51.001398  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:51.019374  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:51.020621  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:51.024951  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:51.501419  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:51.517860  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:51.519006  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:51.525945  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:52.002467  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:52.017274  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:52.019058  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:52.025509  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:52.501980  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:52.517824  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:52.519466  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:52.524793  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:53.001604  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:53.018807  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:53.019698  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:53.025324  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:53.501302  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:53.517854  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:53.519844  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:53.526844  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:54.001945  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:54.017746  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:54.020114  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:54.025868  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:54.501860  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:54.519009  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:54.520308  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:54.525824  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:55.001176  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:55.017056  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:55.019336  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:55.026011  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:55.502015  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:55.518868  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:55.519785  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:55.525794  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:56.002253  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:56.017282  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:56.020639  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:56.026305  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:56.501860  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:56.518058  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:56.519766  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:56.525982  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:57.001770  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:57.018418  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:57.021050  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:57.026140  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:57.502619  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:57.517497  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:57.519971  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:57.526180  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:58.002367  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:58.018215  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:58.020881  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:58.025867  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:58.502163  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:58.518906  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:58.519560  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:58.525238  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:59.002160  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:59.018131  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:59.019720  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:59.026035  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:59.501498  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:59.517861  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:59.520038  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:59.525911  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:00.008043  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:00.108599  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:00.108605  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:00.108940  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:00.501986  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:00.519116  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:00.519363  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:00.526237  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:01.002941  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:01.018164  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:01.019968  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:01.026086  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:01.501165  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:01.518371  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:01.519716  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:01.526191  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:02.003221  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:02.017756  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:02.020569  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:02.025532  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:02.502303  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:02.517833  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:02.520043  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:02.526299  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:03.001963  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:03.019603  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:03.020175  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:03.026074  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:03.501418  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:03.518548  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:03.519326  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:03.526362  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:04.001337  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:04.017680  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:04.020642  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:04.025160  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:04.501481  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:04.519187  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:04.519354  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:04.526002  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:05.001164  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:05.017266  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:05.020018  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:05.025815  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:05.501835  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:05.518458  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:05.519449  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:05.526988  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:06.001942  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:06.017559  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:06.019230  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:06.027617  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:06.501568  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:06.518953  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:06.519722  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:06.525410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:07.000827  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:07.017696  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:07.019798  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:07.025714  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:07.501984  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:07.519229  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:07.520125  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:07.525931  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:08.002067  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:08.018520  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:08.020314  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:08.026702  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:08.501478  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:08.518992  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:08.519109  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:08.525577  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:09.001061  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:09.019049  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:09.019914  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:09.025870  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:09.501375  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:09.517502  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:09.520013  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:09.525860  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:10.002219  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:10.018451  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:10.019784  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:10.025779  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:10.503078  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:10.519196  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:10.519485  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:10.528833  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:11.001789  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:11.017702  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:11.019708  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:11.025298  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:11.501809  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:11.517966  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:11.520785  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:11.526958  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:12.002467  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:12.017726  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:12.019345  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:12.026841  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:12.501551  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:12.518027  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:12.520217  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:12.526558  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:13.001536  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:13.018736  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:13.020611  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:13.025440  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:13.501358  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:13.517837  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:13.519745  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:13.526510  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:14.002283  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:14.017864  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:14.019800  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:14.025916  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:14.502006  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:14.519062  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:14.519655  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:14.525994  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:15.005447  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:15.017234  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:15.019831  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:15.026557  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:15.501996  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:15.519856  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:15.520083  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:15.525230  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:16.002748  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:16.019355  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:16.019533  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:16.025957  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:16.502580  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:16.517837  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:16.519968  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:16.525850  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:17.001935  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:17.019152  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:17.019529  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:17.025144  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:17.503036  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:17.518401  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:17.520738  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:17.525739  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:18.001970  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:18.018590  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:18.019682  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:18.026543  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:18.505234  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:18.517615  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:18.520770  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:18.525690  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:19.001486  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:19.018177  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:19.019004  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:19.025710  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:19.502094  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:19.519521  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:19.520380  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:19.526127  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:20.002068  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:20.020224  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:20.021127  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:20.025520  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:20.501694  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:20.518963  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:20.520765  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:20.525058  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:21.007417  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:21.019690  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:21.024784  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:21.025732  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:21.504133  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:21.520851  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:21.521975  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:21.528716  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:22.002656  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:22.019037  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:22.020474  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:22.026247  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:22.501702  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:22.517925  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:22.521095  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:22.526859  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:23.002583  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:23.019101  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:23.020457  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:23.025456  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:23.502095  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:23.518464  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:23.522059  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:23.526260  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:24.003337  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:24.017841  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:24.021116  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:24.025850  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:24.501756  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:24.518762  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:24.520412  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:24.527410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:25.001848  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:25.018927  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:25.019525  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:25.025681  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:25.501555  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:25.518984  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:25.519924  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:25.526028  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:26.002318  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:26.018839  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:26.021112  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:26.025766  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:26.501254  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:26.518654  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:26.520701  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:26.525608  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:27.001830  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:27.017870  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:27.020014  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:27.026744  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:27.501677  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:27.519613  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:27.519874  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:27.526220  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:28.002947  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:28.019118  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:28.020560  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:28.025161  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:28.501842  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:28.518344  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:28.519678  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:28.525197  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:29.003014  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:29.018826  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:29.020409  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:29.026088  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:29.501916  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:29.518127  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:29.520850  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:29.525382  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:30.001229  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:30.017453  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:30.019095  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:30.026360  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:30.502510  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:30.517380  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:30.518702  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:30.525410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:31.001216  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:31.018086  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:31.020349  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:31.026668  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:31.502075  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:31.518995  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:31.519726  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:31.526262  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:32.011176  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:32.018083  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:32.022218  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:32.026390  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:32.501928  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:32.518961  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:32.519981  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:32.525961  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:33.002956  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:33.018416  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:33.020053  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:33.026871  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:33.503382  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:33.518628  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:33.520030  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:33.526081  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:34.004511  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:34.017733  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:34.019809  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:34.026157  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:34.502455  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:34.517764  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:34.519007  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:34.525748  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:35.002201  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:35.018354  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:35.020561  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:35.024986  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:35.501676  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:35.518080  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:35.520259  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:35.526231  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:36.002290  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:36.017246  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:36.019747  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:36.025424  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:36.502256  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:36.519181  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:36.519361  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:36.526313  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:37.001733  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:37.017924  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:37.019432  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:37.024916  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:37.501788  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:37.518994  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:37.520329  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:37.526158  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:38.002306  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:38.017816  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:38.020329  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:38.026122  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:38.502214  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:38.517689  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:38.519368  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:38.526566  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:39.001344  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:39.018348  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:39.021395  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:39.026118  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:39.502411  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:39.519218  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:39.519487  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:39.526004  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:40.002233  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:40.017415  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:40.020521  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:40.026057  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:40.502613  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:40.518860  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:40.520188  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:40.526090  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:41.002091  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:41.018506  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:41.019711  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:41.025910  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:41.502421  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:41.518400  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:41.521296  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:41.527921  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:42.003104  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:42.018378  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:42.020878  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:42.026161  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:42.502129  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:42.518686  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:42.520170  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:42.525923  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:43.004390  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:43.019175  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:43.022158  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:43.026467  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:43.504086  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:43.520367  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:43.520550  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:43.525380  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:44.002978  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:44.103477  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:44.103494  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:44.104185  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:44.502233  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:44.519809  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:44.519835  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:44.526423  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:45.000496  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:45.018444  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:45.019039  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:45.026510  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:45.502226  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:45.517482  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:45.520689  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:45.525876  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:46.001596  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:46.019690  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:46.021682  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:46.025805  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:46.501418  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:46.517889  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:46.520740  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:46.526273  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:47.001808  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:47.018410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:47.020658  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:47.025282  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:47.502482  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:47.517540  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:47.520502  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:47.525363  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:48.002384  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:48.018017  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:48.020110  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:48.026034  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:48.505672  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:48.520527  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:48.523748  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:48.529163  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:49.002861  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:49.017744  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:49.019716  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:49.025934  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:49.503141  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:49.517174  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:49.519166  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:49.526456  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:50.001342  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:50.017719  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:50.020032  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:50.026547  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:50.501789  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:50.519072  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:50.519782  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:50.525316  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:51.002325  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:51.017470  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:51.021020  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:51.026334  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:51.504006  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:51.518610  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:51.520767  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:51.525227  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:52.003295  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:52.018224  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:52.023940  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:52.028747  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:52.507809  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:52.522785  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:52.523541  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:52.527593  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:53.006856  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:53.021835  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:53.023449  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:53.029978  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:53.506277  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:53.523013  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:53.524326  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:53.531084  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:54.006985  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:54.018665  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:54.023247  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:54.026006  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:54.503056  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:54.519576  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:54.522065  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:54.526728  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:55.003139  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:55.020881  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:55.022886  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:55.028847  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:55.502733  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:55.521726  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:55.530711  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:55.532556  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:56.002638  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:56.021902  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:56.026061  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:56.027811  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:56.501943  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:56.518059  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:56.520358  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:56.527803  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:57.001212  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:57.022110  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:57.023066  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:57.027074  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:57.511753  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:57.522407  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:57.525249  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:57.528427  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:58.003779  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:58.019398  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:58.020765  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:58.025087  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:58.502271  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:58.519021  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:58.520012  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:58.526423  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:59.001770  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:59.028122  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:59.028948  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:59.029097  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:59.503552  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:59.519454  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:59.526099  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:59.528549  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:00.002150  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:00.018589  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:00.020579  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:50:00.026070  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:00.503019  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:00.518818  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:00.521298  535088 kapi.go:107] duration metric: took 4m27.50578325s to wait for app.kubernetes.io/name=ingress-nginx ...
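The "waiting for pod" lines above and the "duration metric" line just logged come from a label-selector polling wait. As a rough illustration only (not minikube's actual kapi.go implementation; the function name waitForPodsBySelector and the 500ms interval are assumptions inferred from the log cadence), a minimal client-go sketch of such a wait could look like this:

// Hypothetical sketch of a label-selector wait loop. Names and the polling
// interval are illustrative assumptions, not the real kapi.go code.
package kapisketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForPodsBySelector polls the API server until every pod matching the
// selector in the namespace reports phase Running, or the context expires.
// It returns how long the wait took, mirroring the "duration metric" line.
func waitForPodsBySelector(ctx context.Context, client kubernetes.Interface, ns, selector string) (time.Duration, error) {
	start := time.Now()
	ticker := time.NewTicker(500 * time.Millisecond) // roughly the cadence seen in the log above
	defer ticker.Stop()

	for {
		select {
		case <-ctx.Done():
			return time.Since(start), fmt.Errorf("pods %q in %q not ready: %w", selector, ns, ctx.Err())
		case <-ticker.C:
			pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return time.Since(start), err
			}
			if len(pods.Items) == 0 {
				// No pod yet: comparable to the repeated "current state: Pending" messages.
				fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
				continue
			}
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					allRunning = false
					break
				}
			}
			if allRunning {
				// Comparable to the "duration metric: took ..." log entry.
				return time.Since(start), nil
			}
		}
	}
}

Under these assumptions, a pod that never leaves Pending (for example because its image pull backs off, as in the registry pod described earlier in this report) keeps the loop printing the same message until the surrounding context deadline fires. The raw log resumes below.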
	I1101 08:50:00.526236  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:01.004597  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:01.017417  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:01.026007  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:01.503117  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:01.517929  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:01.526118  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:02.002140  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:02.017309  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:02.026874  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:02.502193  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:02.517206  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:02.526479  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:03.002066  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:03.018800  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:03.026667  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:03.501870  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:03.518027  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:03.526907  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:04.001943  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:04.018110  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:04.026258  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:04.503167  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:04.518066  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:04.526754  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:05.007821  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:05.017748  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:05.025450  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:05.501643  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:05.518495  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:05.525885  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:06.001380  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:06.017918  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:06.026946  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:06.502671  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:06.518784  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:06.526820  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:07.001754  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:07.019448  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:07.025975  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:07.502164  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:07.517678  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:07.526283  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:08.002858  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:08.019273  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:08.027420  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:08.501670  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:08.518047  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:08.526214  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:09.001840  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:09.018206  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:09.027687  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:09.501188  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:09.517532  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:09.526417  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:10.001069  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:10.018157  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:10.026212  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:10.502289  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:10.518055  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:10.526968  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:11.001635  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:11.017991  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:11.025970  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:11.506621  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:11.517412  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:11.526728  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:12.001701  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:12.018119  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:12.025969  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:12.502625  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:12.517475  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:12.526044  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:13.002186  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:13.018439  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:13.026091  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:13.500970  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:13.519505  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:13.525838  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:14.001977  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:14.018285  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:14.027576  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:14.501280  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:14.517529  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:14.526733  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:15.002377  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:15.018228  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:15.026340  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:15.502885  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:15.517651  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:15.527123  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:16.001756  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:16.018508  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:16.026298  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:16.503500  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:16.517929  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:16.526229  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:17.005499  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:17.105592  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:17.105644  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:17.501723  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:17.518760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:17.525930  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:18.009252  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:18.020798  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:18.026084  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:18.502008  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:18.518188  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:18.526054  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:19.001524  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:19.017526  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:19.026186  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:19.501501  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:19.517658  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:19.526525  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:20.001537  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:20.017379  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:20.027037  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:20.501883  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:20.518635  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:20.525619  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:21.001489  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:21.018302  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:21.026672  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:21.501586  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:21.517885  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:21.526477  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:22.000991  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:22.019224  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:22.027309  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:22.502253  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:22.518048  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:22.526007  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:23.002357  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:23.017858  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:23.027027  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:23.500869  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:23.517747  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:23.526047  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:24.002561  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:24.018227  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:24.027043  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:24.502430  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:24.518125  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:24.526108  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:25.002567  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:25.017833  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:25.025933  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:25.502126  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:25.517859  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:25.526354  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:26.000814  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:26.017887  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:26.026568  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:26.502946  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:26.518678  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:26.526480  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:27.001266  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:27.017216  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:27.026609  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:27.501961  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:27.519120  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:27.526911  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:28.002183  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:28.017072  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:28.026509  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:28.503467  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:28.517754  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:28.525800  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:29.001730  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:29.018081  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:29.026318  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:29.503000  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:29.518477  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:29.525663  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:30.001609  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:30.018380  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:30.027170  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:30.502338  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:30.518067  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:30.526337  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:31.001716  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:31.019042  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:31.026553  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:31.502516  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:31.517742  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:31.526076  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:32.003220  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:32.017115  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:32.026003  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:32.503084  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:32.520638  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:32.525815  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:33.002310  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:33.017855  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:33.026358  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:33.501484  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:33.518215  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:33.527345  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:34.001194  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:34.018531  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:34.026371  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:34.501860  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:34.518822  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:34.526665  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:35.000987  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:35.018881  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:35.026261  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:35.503065  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:35.519434  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:35.526091  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:36.002048  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:36.019887  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:36.026789  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:36.502205  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:36.518344  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:36.527132  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:37.001713  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:37.018302  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:37.027636  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:37.502137  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:37.518679  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:37.526770  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:38.002674  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:38.018502  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:38.025131  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:38.502841  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:38.518479  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:38.525394  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:39.003210  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:39.017479  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:39.026633  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:39.501409  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:39.517624  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:39.525765  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:40.001504  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:40.017795  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:40.026635  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:40.504580  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:40.518573  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:40.526384  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:41.000864  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:41.018489  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:41.025191  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:41.501782  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:41.518173  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:41.526463  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:42.000518  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:42.017873  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:42.027131  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:42.502017  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:42.518539  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:42.526000  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:43.002999  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:43.018398  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:43.027329  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:43.501816  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:43.518023  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:43.526878  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:44.002714  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:44.018483  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:44.026808  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:44.502514  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:44.517486  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:44.525494  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:45.000916  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:45.017682  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:45.026270  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:45.504311  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:45.517633  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:45.529587  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:46.005819  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:46.019419  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:46.028247  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:46.501836  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:46.603570  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:46.604017  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:47.002957  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:47.020722  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:47.103677  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:47.504417  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:47.529109  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:47.535255  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:48.027116  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:48.027384  535088 kapi.go:107] duration metric: took 5m10.029733807s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1101 08:50:48.029168  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:48.029460  535088 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-994396 cluster.
	I1101 08:50:48.030850  535088 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1101 08:50:48.032437  535088 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
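	[Editor's note] The gcp-auth messages above mention opting a pod out of credential mounting by adding a label with the `gcp-auth-skip-secret` key. Below is a minimal, illustrative Go sketch of such a pod object; the label key comes from the log, while the label value ("true"), the pod name, and the container spec are assumptions for the example, not anything produced by this test run.

	// Illustrative only: a pod carrying the gcp-auth-skip-secret label mentioned
	// in the log above. The value "true" and the rest of the spec are assumed.
	package main

	import (
		"encoding/json"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		pod := corev1.Pod{
			TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
			ObjectMeta: metav1.ObjectMeta{
				Name:      "no-gcp-creds", // hypothetical name
				Namespace: "default",
				Labels: map[string]string{
					"gcp-auth-skip-secret": "true", // key from the log; value assumed
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "app", Image: "busybox", Command: []string{"sleep", "3600"}},
				},
			},
		}
		out, _ := json.MarshalIndent(pod, "", "  ")
		fmt.Println(string(out)) // pipe into `kubectl apply -f -` to create it
	}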
	I1101 08:50:48.524544  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:48.531119  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:49.018726  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:49.026282  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:49.518154  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:49.526614  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:50.018751  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:50.026031  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:50.518756  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:50.526155  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:51.018153  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:51.026760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:51.518286  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:51.526672  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:52.017371  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:52.027754  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:52.518074  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:52.526416  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:53.018974  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:53.026602  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:53.518144  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:53.526654  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:54.018625  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:54.026704  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:54.517492  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:54.525999  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:55.019257  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:55.027958  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:55.518075  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:55.526142  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:56.018092  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:56.025605  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:56.518596  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:56.525863  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:57.017562  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:57.025851  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:57.518709  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:57.526387  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:58.018590  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:58.025978  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:58.517643  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:58.525642  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:59.018664  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:59.025863  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:59.517006  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:59.527349  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:00.020576  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:00.029108  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:00.518333  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:00.527511  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:01.018504  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:01.027157  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:01.518405  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:01.526704  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:02.018500  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:02.026694  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:02.517768  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:02.526967  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:03.018243  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:03.026700  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:03.517836  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:03.526719  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:04.017510  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:04.025944  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:04.517662  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:04.526213  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:05.019140  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:05.026847  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:05.522889  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:05.526826  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:06.017784  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:06.026272  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:06.517992  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:06.527109  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:07.018586  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:07.026175  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:07.518974  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:07.526376  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:08.018995  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:08.026615  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:08.517947  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:08.526011  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:09.018511  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:09.025631  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:09.518218  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:09.526593  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:10.018682  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:10.026784  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:10.519095  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:10.527301  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:11.018993  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:11.025690  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:11.518483  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:11.526408  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:12.018208  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:12.027483  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:12.518108  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:12.528506  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:13.018723  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:13.026036  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:13.519547  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:13.525883  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:14.017886  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:14.026485  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:14.518428  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:14.526099  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:15.018816  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:15.028223  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:15.517235  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:15.526608  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:16.019497  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:16.026823  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:16.518374  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:16.526536  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:17.019643  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:17.026636  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:17.519221  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:17.527357  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:18.018310  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:18.027561  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:18.517385  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:18.526970  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:19.018802  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:19.026280  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:19.518858  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:19.527610  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:20.017707  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:20.028465  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:20.518519  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:20.526293  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:21.026625  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:21.030779  535088 kapi.go:107] duration metric: took 5m45.508455317s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1101 08:51:21.518734  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:22.018071  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:22.517851  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:23.022943  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:23.518235  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:24.018970  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:24.517611  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:25.019971  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:25.519134  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:26.018419  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:26.518767  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:27.018701  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:27.519283  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:28.019085  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:28.518032  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:29.019182  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:29.519048  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:30.018264  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:30.518858  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:31.018124  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:31.519120  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:32.021956  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:32.519959  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:33.014506  535088 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=registry" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1101 08:51:33.014547  535088 kapi.go:107] duration metric: took 6m0.000528296s to wait for kubernetes.io/minikube-addons=registry ...
	W1101 08:51:33.014668  535088 out.go:285] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I1101 08:51:33.016548  535088 out.go:179] * Enabled addons: amd-gpu-device-plugin, storage-provisioner, cloud-spanner, ingress-dns, nvidia-device-plugin, registry-creds, metrics-server, yakd, default-storageclass, volumesnapshots, ingress, gcp-auth, csi-hostpath-driver
	I1101 08:51:33.017988  535088 addons.go:515] duration metric: took 6m9.594756816s for enable addons: enabled=[amd-gpu-device-plugin storage-provisioner cloud-spanner ingress-dns nvidia-device-plugin registry-creds metrics-server yakd default-storageclass volumesnapshots ingress gcp-auth csi-hostpath-driver]
	I1101 08:51:33.018036  535088 start.go:247] waiting for cluster config update ...
	I1101 08:51:33.018057  535088 start.go:256] writing updated cluster config ...
	I1101 08:51:33.018363  535088 ssh_runner.go:195] Run: rm -f paused
	I1101 08:51:33.027702  535088 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 08:51:33.035072  535088 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2rqh8" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.039692  535088 pod_ready.go:94] pod "coredns-66bc5c9577-2rqh8" is "Ready"
	I1101 08:51:33.039727  535088 pod_ready.go:86] duration metric: took 4.614622ms for pod "coredns-66bc5c9577-2rqh8" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.041954  535088 pod_ready.go:83] waiting for pod "etcd-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.046075  535088 pod_ready.go:94] pod "etcd-addons-994396" is "Ready"
	I1101 08:51:33.046103  535088 pod_ready.go:86] duration metric: took 4.126087ms for pod "etcd-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.048189  535088 pod_ready.go:83] waiting for pod "kube-apiserver-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.052772  535088 pod_ready.go:94] pod "kube-apiserver-addons-994396" is "Ready"
	I1101 08:51:33.052802  535088 pod_ready.go:86] duration metric: took 4.587761ms for pod "kube-apiserver-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.055446  535088 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.433771  535088 pod_ready.go:94] pod "kube-controller-manager-addons-994396" is "Ready"
	I1101 08:51:33.433801  535088 pod_ready.go:86] duration metric: took 378.329685ms for pod "kube-controller-manager-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.634675  535088 pod_ready.go:83] waiting for pod "kube-proxy-fbmdq" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:34.034403  535088 pod_ready.go:94] pod "kube-proxy-fbmdq" is "Ready"
	I1101 08:51:34.034444  535088 pod_ready.go:86] duration metric: took 399.738812ms for pod "kube-proxy-fbmdq" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:34.233978  535088 pod_ready.go:83] waiting for pod "kube-scheduler-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:34.633095  535088 pod_ready.go:94] pod "kube-scheduler-addons-994396" is "Ready"
	I1101 08:51:34.633131  535088 pod_ready.go:86] duration metric: took 399.109096ms for pod "kube-scheduler-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:34.633149  535088 pod_ready.go:40] duration metric: took 1.605381934s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 08:51:34.682753  535088 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 08:51:34.684612  535088 out.go:179] * Done! kubectl is now configured to use "addons-994396" cluster and "default" namespace by default
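	[Editor's note] The bulk of the log above is kapi.go polling pods by label selector until a timeout, ending in "context deadline exceeded" for the registry addon. For context, here is a minimal client-go sketch of that kind of wait loop; it is not minikube's actual kapi.go implementation, and the 500ms polling interval, the 6-minute timeout, and the simplified "phase == Running" readiness check are assumptions drawn only loosely from the log.

	// Illustrative only: poll pods matching a label selector until they run or
	// the context deadline expires (which surfaces as the error seen above).
	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitForPods(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
		ticker := time.NewTicker(500 * time.Millisecond) // log lines arrive roughly every 500ms
		defer ticker.Stop()
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return fmt.Errorf("getting pods with label selector %q: %w", selector, err)
			}
			ready := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != "Running" { // simplified readiness check
					ready = false
				}
			}
			if ready {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err() // "context deadline exceeded"
			case <-ticker.C:
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute) // assumed timeout
		defer cancel()
		if err := waitForPods(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
			fmt.Println("wait failed:", err)
		}
	}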
	
	
	==> CRI-O <==
	Nov 01 09:04:10 addons-994396 crio[817]: time="2025-11-01 09:04:10.497137532Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5ee1a026-2a63-4b17-85cd-2222174897c7 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:04:10 addons-994396 crio[817]: time="2025-11-01 09:04:10.498692517Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b07ef891-2e20-40f6-9791-375140b34f7c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:04:10 addons-994396 crio[817]: time="2025-11-01 09:04:10.499994548Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761987850499854524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:454585,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b07ef891-2e20-40f6-9791-375140b34f7c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:04:10 addons-994396 crio[817]: time="2025-11-01 09:04:10.500609788Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3c7e1c9b-c5e3-4d63-803b-623e074fb150 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:04:10 addons-994396 crio[817]: time="2025-11-01 09:04:10.500662416Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3c7e1c9b-c5e3-4d63-803b-623e074fb150 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:04:10 addons-994396 crio[817]: time="2025-11-01 09:04:10.501266586Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9aac7eb34690309e8dbd81343ee4a3afed4182f729bfb09119b2d0449fcb5163,PodSandboxId:cdbcecc3e9d43396748d11feb94389c468413b4e4db1f33c0ffbb67ba8cb8455,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761987117609973399,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f6cc746-15b0-4ddb-9f8b-fa3a7e7133ea,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f73cee1644b036ab76f839b96acf06de4009bbf807c978116290374a0b56065c,PodSandboxId:147663b03fe636d80386c5b9e498c5fb95c78d278121e7fb146f12c7e973609d,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761986999531197788,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-9cxnd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bf616938-c2ab-4f4c-92c8-9fa4ab2f6be9,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.po
rts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:7fbb154c5ba009280da1a426866a4cdde2195fb0006640dafb05c0da182a4866,PodSandboxId:058d4f2c90db7e8eae07ad5783426e56e467541eacbcb171f0f9227663407e68,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a
7bf2,State:CONTAINER_EXITED,CreatedAt:1761986861153109309,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dmt9r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7e49bedc-b72d-400d-bc07-62040e55ac39,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e6c68a57ee535127b46ca112ce1439ee32d248af87fb4452856eb3e38c8eb2e,PodSandboxId:a5dfb28615faf962ed89b8003d79c80e87152c2a8d669af58898bd3254030389,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba
112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761986861018576547,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6ptqs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9fe7abf8-c7e2-47ee-ac99-699c34674a22,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d2226436f827529da95ea6b9148e9aad9e62a07499351f701e80b097311d036,PodSandboxId:c449271f0824b108061a1ee1fc23fbe6d16056014d0cfc3011aa2c20b94a8e24,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1761986829754350164,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-bzs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151e456a-63e0-4527-8511-34c4444fef48,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda41d22ea7ff808cb20920820ccf87f95d0c484f75f853dec58fc5d4aaa461b,PodSandboxId:e07af8e7a3ecad5569ae3da9545b988c374ac9f7b90e8533dd68c1dd6ecef92c,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha
256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761986826170977750,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-z8nnd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c555360c-9a9f-4fdd-aa67-f18c3d2a4eb2,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b56bd6c195bd711f17cd7b927c9fbb20679383d08b6e954d3297e9850be5235,PodSandboxId:6d69749ca9bc78fa01c49c7d0757f3d0eafa3536279a622367a1a3b427e5d70c,Metadata:&ContainerMetad
ata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1761986821805194743,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-9ghvj,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d3c3231a-40d9-42f1-bc78-e2d1a104327a,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4c1be283a7f47690c854c85c4dcacc3e8b42f6727081c4a8a73e3e44c1d194,PodSandboxId:9f7ac0dd48cc1abeb427
3f865cde830d51e77c8bd29a6c76ccecaf35745e99f7,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761986758449407963,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d947f942-2149-492a-9b4e-1f9c22405815,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:2ad7748982f904bf89ac86d1b7be83acfe37cfe9d240db5a3d2236808b8910a3,PodSandboxId:ca1dd787f338ac0254f2b930b7369f671d7ee68d7732bee6af1cf786d745c456,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761986733821709901,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0182754-0c9c-458b-a340-20ec025cb56c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:9bb5f4d4e768dfe5c0cf6bc80363bf72a32d74ddba50c19fc7e3e82b2268e1d3,PodSandboxId:fec37181f6706eb4994bc850d0e6623521190c923720024b4407780ba5c3168a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761986732059653348,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-vssmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b8c16e-b583-47df-a5c2-97218d3ec5be,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ff7b8e8784408623315cf07e8942d13f74e52cb65ad09e2d25796114020c1,PodSandboxId:d62d15d11c4955eb24e7866e8b7732b6d4471d399c0e33cef74d06eb40917eec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761986725130503569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2rqh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131b2b2-f9b9-4197-8bc7-4d1bc185c804,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0a2f86b38f42fab057b3fea7994c15073ec1d05f3db97341f0fed0ad342cf9,PodSandboxId:e1fb2fcb1123b9a18ac17a1d8481c82478eed03828d094aab60d26b7c2f58bbd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761986724242985390,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fbmdq,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: dc5dd6b4-2f38-4c9d-acd8-92f7984fd96a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80489befa62b8185c103a7d016a78a5924e4c5187536cb66142d1c5f8cc4a5b5,PodSandboxId:d4cfa30f1a32a450d85f51370323574b5a0bcae75643efe39250a8b24cc1a1c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761986712208719638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-994396,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: e0eeda84be59c6c1c023d04bf2f88758,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:844d913e662bc4587cf597763a1bad42bb8a4bf500ce948d822cfcb86a7e9fde,PodSandboxId:e2f739ab181cd43a508788c71e0d98b6ca0994d643a2896de2364e7f842ffa0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761986712197993742,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d081dd6df6b55662a095a017ad5712,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdeec4098b47d6e27b77f71ac1761aeb26a09c97d53566cde6a7c5ae79150c25,PodSandboxId:f1c88f09470e5834b2b0cfcdaddaf03ac25c10fd6f3492dc69b5941eb059bbae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:
1761986712168522475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abcff5cb337834c6fd7a11d68a6b7be4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35bb45a49c1f528c9112deb8bfa037389ae6fae43afcbb2f86e4c3ed61156bf8,PodSandboxId:80615bf9878bb70db26be3ecace94169c4b7e503113541f10f7df27e95d8c035,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761986712170158026,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5912e2b5f9c4192157a57bf3d5021f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3c7e1c9b-c5e3-4d63-803b-623e074fb150 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:04:10 addons-994396 crio[817]: time="2025-11-01 09:04:10.541171079Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=785a0e28-f951-4e64-a217-b45e584c0707 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:04:10 addons-994396 crio[817]: time="2025-11-01 09:04:10.541360307Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=785a0e28-f951-4e64-a217-b45e584c0707 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:04:10 addons-994396 crio[817]: time="2025-11-01 09:04:10.543591499Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=33df3c94-8f80-4393-b0c5-ab86c116eef1 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:04:10 addons-994396 crio[817]: time="2025-11-01 09:04:10.544841938Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761987850544814098,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:454585,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=33df3c94-8f80-4393-b0c5-ab86c116eef1 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:04:10 addons-994396 crio[817]: time="2025-11-01 09:04:10.545722329Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17ca0019-4d8a-42fa-8ad2-3fbe95c14c48 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:04:10 addons-994396 crio[817]: time="2025-11-01 09:04:10.545816617Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17ca0019-4d8a-42fa-8ad2-3fbe95c14c48 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:04:10 addons-994396 crio[817]: time="2025-11-01 09:04:10.546253200Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9aac7eb34690309e8dbd81343ee4a3afed4182f729bfb09119b2d0449fcb5163,PodSandboxId:cdbcecc3e9d43396748d11feb94389c468413b4e4db1f33c0ffbb67ba8cb8455,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761987117609973399,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f6cc746-15b0-4ddb-9f8b-fa3a7e7133ea,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f73cee1644b036ab76f839b96acf06de4009bbf807c978116290374a0b56065c,PodSandboxId:147663b03fe636d80386c5b9e498c5fb95c78d278121e7fb146f12c7e973609d,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761986999531197788,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-9cxnd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bf616938-c2ab-4f4c-92c8-9fa4ab2f6be9,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.po
rts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:7fbb154c5ba009280da1a426866a4cdde2195fb0006640dafb05c0da182a4866,PodSandboxId:058d4f2c90db7e8eae07ad5783426e56e467541eacbcb171f0f9227663407e68,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a
7bf2,State:CONTAINER_EXITED,CreatedAt:1761986861153109309,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dmt9r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7e49bedc-b72d-400d-bc07-62040e55ac39,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e6c68a57ee535127b46ca112ce1439ee32d248af87fb4452856eb3e38c8eb2e,PodSandboxId:a5dfb28615faf962ed89b8003d79c80e87152c2a8d669af58898bd3254030389,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba
112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761986861018576547,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6ptqs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9fe7abf8-c7e2-47ee-ac99-699c34674a22,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d2226436f827529da95ea6b9148e9aad9e62a07499351f701e80b097311d036,PodSandboxId:c449271f0824b108061a1ee1fc23fbe6d16056014d0cfc3011aa2c20b94a8e24,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1761986829754350164,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-bzs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151e456a-63e0-4527-8511-34c4444fef48,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda41d22ea7ff808cb20920820ccf87f95d0c484f75f853dec58fc5d4aaa461b,PodSandboxId:e07af8e7a3ecad5569ae3da9545b988c374ac9f7b90e8533dd68c1dd6ecef92c,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha
256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761986826170977750,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-z8nnd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c555360c-9a9f-4fdd-aa67-f18c3d2a4eb2,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b56bd6c195bd711f17cd7b927c9fbb20679383d08b6e954d3297e9850be5235,PodSandboxId:6d69749ca9bc78fa01c49c7d0757f3d0eafa3536279a622367a1a3b427e5d70c,Metadata:&ContainerMetad
ata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1761986821805194743,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-9ghvj,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d3c3231a-40d9-42f1-bc78-e2d1a104327a,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4c1be283a7f47690c854c85c4dcacc3e8b42f6727081c4a8a73e3e44c1d194,PodSandboxId:9f7ac0dd48cc1abeb427
3f865cde830d51e77c8bd29a6c76ccecaf35745e99f7,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761986758449407963,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d947f942-2149-492a-9b4e-1f9c22405815,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:2ad7748982f904bf89ac86d1b7be83acfe37cfe9d240db5a3d2236808b8910a3,PodSandboxId:ca1dd787f338ac0254f2b930b7369f671d7ee68d7732bee6af1cf786d745c456,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761986733821709901,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0182754-0c9c-458b-a340-20ec025cb56c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:9bb5f4d4e768dfe5c0cf6bc80363bf72a32d74ddba50c19fc7e3e82b2268e1d3,PodSandboxId:fec37181f6706eb4994bc850d0e6623521190c923720024b4407780ba5c3168a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761986732059653348,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-vssmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b8c16e-b583-47df-a5c2-97218d3ec5be,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ff7b8e8784408623315cf07e8942d13f74e52cb65ad09e2d25796114020c1,PodSandboxId:d62d15d11c4955eb24e7866e8b7732b6d4471d399c0e33cef74d06eb40917eec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761986725130503569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2rqh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131b2b2-f9b9-4197-8bc7-4d1bc185c804,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0a2f86b38f42fab057b3fea7994c15073ec1d05f3db97341f0fed0ad342cf9,PodSandboxId:e1fb2fcb1123b9a18ac17a1d8481c82478eed03828d094aab60d26b7c2f58bbd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761986724242985390,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fbmdq,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: dc5dd6b4-2f38-4c9d-acd8-92f7984fd96a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80489befa62b8185c103a7d016a78a5924e4c5187536cb66142d1c5f8cc4a5b5,PodSandboxId:d4cfa30f1a32a450d85f51370323574b5a0bcae75643efe39250a8b24cc1a1c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761986712208719638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-994396,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: e0eeda84be59c6c1c023d04bf2f88758,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:844d913e662bc4587cf597763a1bad42bb8a4bf500ce948d822cfcb86a7e9fde,PodSandboxId:e2f739ab181cd43a508788c71e0d98b6ca0994d643a2896de2364e7f842ffa0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761986712197993742,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d081dd6df6b55662a095a017ad5712,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdeec4098b47d6e27b77f71ac1761aeb26a09c97d53566cde6a7c5ae79150c25,PodSandboxId:f1c88f09470e5834b2b0cfcdaddaf03ac25c10fd6f3492dc69b5941eb059bbae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:
1761986712168522475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abcff5cb337834c6fd7a11d68a6b7be4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35bb45a49c1f528c9112deb8bfa037389ae6fae43afcbb2f86e4c3ed61156bf8,PodSandboxId:80615bf9878bb70db26be3ecace94169c4b7e503113541f10f7df27e95d8c035,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761986712170158026,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5912e2b5f9c4192157a57bf3d5021f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=17ca0019-4d8a-42fa-8ad2-3fbe95c14c48 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:04:10 addons-994396 crio[817]: time="2025-11-01 09:04:10.584850827Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2c85ba24-0c9e-4d94-afef-14afc3ecb5fb name=/runtime.v1.RuntimeService/Version
	Nov 01 09:04:10 addons-994396 crio[817]: time="2025-11-01 09:04:10.585143070Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2c85ba24-0c9e-4d94-afef-14afc3ecb5fb name=/runtime.v1.RuntimeService/Version
	Nov 01 09:04:10 addons-994396 crio[817]: time="2025-11-01 09:04:10.586751179Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e65ee811-c38c-4717-a518-c747ed33206f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:04:10 addons-994396 crio[817]: time="2025-11-01 09:04:10.587802194Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761987850587777554,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:454585,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e65ee811-c38c-4717-a518-c747ed33206f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:04:10 addons-994396 crio[817]: time="2025-11-01 09:04:10.588708216Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e41cdb68-c82a-4b47-99d3-4303b21f45b3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:04:10 addons-994396 crio[817]: time="2025-11-01 09:04:10.588936905Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e41cdb68-c82a-4b47-99d3-4303b21f45b3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:04:10 addons-994396 crio[817]: time="2025-11-01 09:04:10.589363276Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9aac7eb34690309e8dbd81343ee4a3afed4182f729bfb09119b2d0449fcb5163,PodSandboxId:cdbcecc3e9d43396748d11feb94389c468413b4e4db1f33c0ffbb67ba8cb8455,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761987117609973399,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f6cc746-15b0-4ddb-9f8b-fa3a7e7133ea,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f73cee1644b036ab76f839b96acf06de4009bbf807c978116290374a0b56065c,PodSandboxId:147663b03fe636d80386c5b9e498c5fb95c78d278121e7fb146f12c7e973609d,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761986999531197788,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-9cxnd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bf616938-c2ab-4f4c-92c8-9fa4ab2f6be9,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.po
rts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:7fbb154c5ba009280da1a426866a4cdde2195fb0006640dafb05c0da182a4866,PodSandboxId:058d4f2c90db7e8eae07ad5783426e56e467541eacbcb171f0f9227663407e68,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a
7bf2,State:CONTAINER_EXITED,CreatedAt:1761986861153109309,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dmt9r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7e49bedc-b72d-400d-bc07-62040e55ac39,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e6c68a57ee535127b46ca112ce1439ee32d248af87fb4452856eb3e38c8eb2e,PodSandboxId:a5dfb28615faf962ed89b8003d79c80e87152c2a8d669af58898bd3254030389,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba
112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761986861018576547,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6ptqs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9fe7abf8-c7e2-47ee-ac99-699c34674a22,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d2226436f827529da95ea6b9148e9aad9e62a07499351f701e80b097311d036,PodSandboxId:c449271f0824b108061a1ee1fc23fbe6d16056014d0cfc3011aa2c20b94a8e24,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1761986829754350164,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-bzs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151e456a-63e0-4527-8511-34c4444fef48,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda41d22ea7ff808cb20920820ccf87f95d0c484f75f853dec58fc5d4aaa461b,PodSandboxId:e07af8e7a3ecad5569ae3da9545b988c374ac9f7b90e8533dd68c1dd6ecef92c,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha
256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761986826170977750,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-z8nnd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c555360c-9a9f-4fdd-aa67-f18c3d2a4eb2,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b56bd6c195bd711f17cd7b927c9fbb20679383d08b6e954d3297e9850be5235,PodSandboxId:6d69749ca9bc78fa01c49c7d0757f3d0eafa3536279a622367a1a3b427e5d70c,Metadata:&ContainerMetad
ata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1761986821805194743,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-9ghvj,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d3c3231a-40d9-42f1-bc78-e2d1a104327a,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4c1be283a7f47690c854c85c4dcacc3e8b42f6727081c4a8a73e3e44c1d194,PodSandboxId:9f7ac0dd48cc1abeb427
3f865cde830d51e77c8bd29a6c76ccecaf35745e99f7,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761986758449407963,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d947f942-2149-492a-9b4e-1f9c22405815,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:2ad7748982f904bf89ac86d1b7be83acfe37cfe9d240db5a3d2236808b8910a3,PodSandboxId:ca1dd787f338ac0254f2b930b7369f671d7ee68d7732bee6af1cf786d745c456,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761986733821709901,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0182754-0c9c-458b-a340-20ec025cb56c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:9bb5f4d4e768dfe5c0cf6bc80363bf72a32d74ddba50c19fc7e3e82b2268e1d3,PodSandboxId:fec37181f6706eb4994bc850d0e6623521190c923720024b4407780ba5c3168a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761986732059653348,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-vssmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b8c16e-b583-47df-a5c2-97218d3ec5be,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ff7b8e8784408623315cf07e8942d13f74e52cb65ad09e2d25796114020c1,PodSandboxId:d62d15d11c4955eb24e7866e8b7732b6d4471d399c0e33cef74d06eb40917eec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761986725130503569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2rqh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131b2b2-f9b9-4197-8bc7-4d1bc185c804,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0a2f86b38f42fab057b3fea7994c15073ec1d05f3db97341f0fed0ad342cf9,PodSandboxId:e1fb2fcb1123b9a18ac17a1d8481c82478eed03828d094aab60d26b7c2f58bbd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761986724242985390,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fbmdq,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: dc5dd6b4-2f38-4c9d-acd8-92f7984fd96a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80489befa62b8185c103a7d016a78a5924e4c5187536cb66142d1c5f8cc4a5b5,PodSandboxId:d4cfa30f1a32a450d85f51370323574b5a0bcae75643efe39250a8b24cc1a1c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761986712208719638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-994396,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: e0eeda84be59c6c1c023d04bf2f88758,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:844d913e662bc4587cf597763a1bad42bb8a4bf500ce948d822cfcb86a7e9fde,PodSandboxId:e2f739ab181cd43a508788c71e0d98b6ca0994d643a2896de2364e7f842ffa0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761986712197993742,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d081dd6df6b55662a095a017ad5712,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdeec4098b47d6e27b77f71ac1761aeb26a09c97d53566cde6a7c5ae79150c25,PodSandboxId:f1c88f09470e5834b2b0cfcdaddaf03ac25c10fd6f3492dc69b5941eb059bbae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:
1761986712168522475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abcff5cb337834c6fd7a11d68a6b7be4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35bb45a49c1f528c9112deb8bfa037389ae6fae43afcbb2f86e4c3ed61156bf8,PodSandboxId:80615bf9878bb70db26be3ecace94169c4b7e503113541f10f7df27e95d8c035,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761986712170158026,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5912e2b5f9c4192157a57bf3d5021f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e41cdb68-c82a-4b47-99d3-4303b21f45b3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:04:10 addons-994396 crio[817]: time="2025-11-01 09:04:10.596299039Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=c1095848-af9b-4687-9645-91b473b92df6 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 01 09:04:10 addons-994396 crio[817]: time="2025-11-01 09:04:10.596720317Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:b3b9c928272a2058f610b901827dfea1248e8f5a0da2fdafa840826ebc6e7983,Metadata:&PodSandboxMetadata{Name:helper-pod-create-pvc-2db794c4-2444-4d03-b933-772cf722902e,Uid:c2c6242b-10ca-4397-9d00-1e3f0d0aa51b,Namespace:local-path-storage,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1761987674089796874,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: helper-pod-create-pvc-2db794c4-2444-4d03-b933-772cf722902e,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c2c6242b-10ca-4397-9d00-1e3f0d0aa51b,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T09:01:13.772725845Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7a688e95ff774d333d03aeba9040f6474240997aeddde89b8afd82798cc9e706,Metadata:&PodSandboxMetadata{Name:nginx,Uid:9c49ac5d-18e5-470b-9217-c0a58f0636a1,Nam
espace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761987369396636901,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9c49ac5d-18e5-470b-9217-c0a58f0636a1,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:56:09.077414941Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c5a1f5307a5a0e8d620f46ea3fb4500fae706cd5d81b910f9344a2dc34840763,Metadata:&PodSandboxMetadata{Name:task-pv-pod,Uid:8623da74-791e-4fd6-a974-60ebca5738a7,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761987164436439077,Labels:map[string]string{app: task-pv-pod,io.kubernetes.container.name: POD,io.kubernetes.pod.name: task-pv-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8623da74-791e-4fd6-a974-60ebca5738a7,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:52:44.116093759Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSand
box{Id:cdbcecc3e9d43396748d11feb94389c468413b4e4db1f33c0ffbb67ba8cb8455,Metadata:&PodSandboxMetadata{Name:busybox,Uid:4f6cc746-15b0-4ddb-9f8b-fa3a7e7133ea,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761987095651519563,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f6cc746-15b0-4ddb-9f8b-fa3a7e7133ea,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:51:35.327103269Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:147663b03fe636d80386c5b9e498c5fb95c78d278121e7fb146f12c7e973609d,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-675c5ddd98-9cxnd,Uid:bf616938-c2ab-4f4c-92c8-9fa4ab2f6be9,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986982879427207,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-
auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-9cxnd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bf616938-c2ab-4f4c-92c8-9fa4ab2f6be9,pod-template-hash: 675c5ddd98,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:45:32.720554779Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a5dfb28615faf962ed89b8003d79c80e87152c2a8d669af58898bd3254030389,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-6ptqs,Uid:9fe7abf8-c7e2-47ee-ac99-699c34674a22,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1761986733549694504,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: 608bce68-2083-4523-b519-13c4d6cad8fa,batch.kubernetes.io/job-name: ingress-nginx-admission-create,controller-uid: 608bce68-2083-4523-b519-13c4d6cad8fa,io.ku
bernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-6ptqs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9fe7abf8-c7e2-47ee-ac99-699c34674a22,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:45:32.773581153Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:058d4f2c90db7e8eae07ad5783426e56e467541eacbcb171f0f9227663407e68,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-dmt9r,Uid:7e49bedc-b72d-400d-bc07-62040e55ac39,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1761986733206850623,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: bb2b857a-ecad-44c0-93d8-e9ecb84ec3bf,batch.kubernetes.io/job-name: ingress-nginx-admission-patch,controller-uid: bb2b857a-ecad-44c0-93d8-e9ecb84ec3bf,io.kubernetes.container.name:
POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-dmt9r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7e49bedc-b72d-400d-bc07-62040e55ac39,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:45:32.824364839Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e07af8e7a3ecad5569ae3da9545b988c374ac9f7b90e8533dd68c1dd6ecef92c,Metadata:&PodSandboxMetadata{Name:gadget-z8nnd,Uid:c555360c-9a9f-4fdd-aa67-f18c3d2a4eb2,Namespace:gadget,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986732252775766,Labels:map[string]string{controller-revision-hash: d797fcb64,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gadget-z8nnd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c555360c-9a9f-4fdd-aa67-f18c3d2a4eb2,k8s-app: gadget,pod-template-generation: 1,},Annotations:map[string]string{container.apparmor.security.beta.kubernetes.io/gadget: unconfined,kubernetes.io/config.seen: 2025-11-01T08:45:31.810689200Z
,kubernetes.io/config.source: api,prometheus.io/path: /metrics,prometheus.io/port: 2223,prometheus.io/scrape: true,},RuntimeHandler:,},&PodSandbox{Id:6d69749ca9bc78fa01c49c7d0757f3d0eafa3536279a622367a1a3b427e5d70c,Metadata:&PodSandboxMetadata{Name:local-path-provisioner-648f6765c9-9ghvj,Uid:d3c3231a-40d9-42f1-bc78-e2d1a104327a,Namespace:local-path-storage,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986731585408537,Labels:map[string]string{app: local-path-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-9ghvj,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d3c3231a-40d9-42f1-bc78-e2d1a104327a,pod-template-hash: 648f6765c9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:45:30.990687010Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ca1dd787f338ac0254f2b930b7369f671d7ee68d7732bee6af1cf786d745c456,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:a0182754-0c9c-458b-a340-20ec025cb
56c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986731574668336,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0182754-0c9c-458b-a340-20ec025cb56c,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",
\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-11-01T08:45:30.530361901Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9f7ac0dd48cc1abeb4273f865cde830d51e77c8bd29a6c76ccecaf35745e99f7,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:d947f942-2149-492a-9b4e-1f9c22405815,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986731411874379,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d947f942-2149-492a-9b4e-1f9c22405815,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minik
ube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"hostPort\":53,\"protocol\":\"UDP\"}],\"volumeMounts\":[{\"mountPath\":\"/config\",\"name\":\"minikube-ingress-dns-config-volume\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\",\"volumes\":[{\"configMap\":{\"name\":\"minikube-ingress-dns\"},\"name\":\"minikube-ingress-dns-config-volume\"}]}}\n,kubernetes.io/config.seen: 2025-11-01T08:45:29.770167923Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c449271f0824b108061a1ee1fc23fbe6d16056014d0cfc3011aa2c20b94a8e24,Metadata:&PodSandboxMetadata{Name:registry-proxy-bzs78,Uid:151e456a-63e0-452
7-8511-34c4444fef48,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986731364422760,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,controller-revision-hash: 65b944f647,io.kubernetes.container.name: POD,io.kubernetes.pod.name: registry-proxy-bzs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151e456a-63e0-4527-8511-34c4444fef48,kubernetes.io/minikube-addons: registry,pod-template-generation: 1,registry-proxy: true,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:45:29.495875265Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b06b6cc06bc5fa49dc1e6aa03c98e75401763147b91202b99f1d103ce1ee29d2,Metadata:&PodSandboxMetadata{Name:registry-6b586f9694-b4ph6,Uid:f2c8e5be-bee4-4b31-a8dc-ee43d6a6430c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986731333681368,Labels:map[string]string{actual-registry: true,addonmanager.kubernetes.io/mode: Reconcile,io.kubernetes.container.name: POD,io.kubernetes.p
od.name: registry-6b586f9694-b4ph6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c8e5be-bee4-4b31-a8dc-ee43d6a6430c,kubernetes.io/minikube-addons: registry,pod-template-hash: 6b586f9694,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:45:29.152437473Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fec37181f6706eb4994bc850d0e6623521190c923720024b4407780ba5c3168a,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plugin-vssmp,Uid:a3b8c16e-b583-47df-a5c2-97218d3ec5be,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986727049009432,Labels:map[string]string{controller-revision-hash: 7f87d6fd8d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-vssmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b8c16e-b583-47df-a5c2-97218d3ec5be,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:
45:26.718957327Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d62d15d11c4955eb24e7866e8b7732b6d4471d399c0e33cef74d06eb40917eec,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-2rqh8,Uid:b131b2b2-f9b9-4197-8bc7-4d1bc185c804,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986724017093656,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-2rqh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131b2b2-f9b9-4197-8bc7-4d1bc185c804,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:45:23.654384746Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e1fb2fcb1123b9a18ac17a1d8481c82478eed03828d094aab60d26b7c2f58bbd,Metadata:&PodSandboxMetadata{Name:kube-proxy-fbmdq,Uid:dc5dd6b4-2f38-4c9d-acd8-92f7984fd96a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986723855325038,Labels:map[string]string{controlle
r-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-fbmdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5dd6b4-2f38-4c9d-acd8-92f7984fd96a,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:45:23.475753329Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e2f739ab181cd43a508788c71e0d98b6ca0994d643a2896de2364e7f842ffa0d,Metadata:&PodSandboxMetadata{Name:etcd-addons-994396,Uid:31d081dd6df6b55662a095a017ad5712,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986711956221288,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d081dd6df6b55662a095a017ad5712,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.195:2379,kubernetes.io/config.hash: 31d0
81dd6df6b55662a095a017ad5712,kubernetes.io/config.seen: 2025-11-01T08:45:11.165275870Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:80615bf9878bb70db26be3ecace94169c4b7e503113541f10f7df27e95d8c035,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-994396,Uid:5912e2b5f9c4192157a57bf3d5021f7e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986711949626239,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5912e2b5f9c4192157a57bf3d5021f7e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5912e2b5f9c4192157a57bf3d5021f7e,kubernetes.io/config.seen: 2025-11-01T08:45:11.165273714Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d4cfa30f1a32a450d85f51370323574b5a0bcae75643efe39250a8b24cc1a1c1,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addo
ns-994396,Uid:e0eeda84be59c6c1c023d04bf2f88758,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986711947877914,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0eeda84be59c6c1c023d04bf2f88758,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e0eeda84be59c6c1c023d04bf2f88758,kubernetes.io/config.seen: 2025-11-01T08:45:11.165274783Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f1c88f09470e5834b2b0cfcdaddaf03ac25c10fd6f3492dc69b5941eb059bbae,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-994396,Uid:abcff5cb337834c6fd7a11d68a6b7be4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986711944495415,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-994396,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: abcff5cb337834c6fd7a11d68a6b7be4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.195:8443,kubernetes.io/config.hash: abcff5cb337834c6fd7a11d68a6b7be4,kubernetes.io/config.seen: 2025-11-01T08:45:11.165269521Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=c1095848-af9b-4687-9645-91b473b92df6 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 01 09:04:10 addons-994396 crio[817]: time="2025-11-01 09:04:10.597684616Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=03388c6d-aca1-445b-bcc3-643813e2ea57 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:04:10 addons-994396 crio[817]: time="2025-11-01 09:04:10.597756838Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=03388c6d-aca1-445b-bcc3-643813e2ea57 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:04:10 addons-994396 crio[817]: time="2025-11-01 09:04:10.598154519Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9aac7eb34690309e8dbd81343ee4a3afed4182f729bfb09119b2d0449fcb5163,PodSandboxId:cdbcecc3e9d43396748d11feb94389c468413b4e4db1f33c0ffbb67ba8cb8455,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761987117609973399,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f6cc746-15b0-4ddb-9f8b-fa3a7e7133ea,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f73cee1644b036ab76f839b96acf06de4009bbf807c978116290374a0b56065c,PodSandboxId:147663b03fe636d80386c5b9e498c5fb95c78d278121e7fb146f12c7e973609d,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761986999531197788,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-9cxnd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bf616938-c2ab-4f4c-92c8-9fa4ab2f6be9,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.po
rts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:7fbb154c5ba009280da1a426866a4cdde2195fb0006640dafb05c0da182a4866,PodSandboxId:058d4f2c90db7e8eae07ad5783426e56e467541eacbcb171f0f9227663407e68,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a
7bf2,State:CONTAINER_EXITED,CreatedAt:1761986861153109309,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dmt9r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7e49bedc-b72d-400d-bc07-62040e55ac39,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e6c68a57ee535127b46ca112ce1439ee32d248af87fb4452856eb3e38c8eb2e,PodSandboxId:a5dfb28615faf962ed89b8003d79c80e87152c2a8d669af58898bd3254030389,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba
112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761986861018576547,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6ptqs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9fe7abf8-c7e2-47ee-ac99-699c34674a22,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d2226436f827529da95ea6b9148e9aad9e62a07499351f701e80b097311d036,PodSandboxId:c449271f0824b108061a1ee1fc23fbe6d16056014d0cfc3011aa2c20b94a8e24,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1761986829754350164,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-bzs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151e456a-63e0-4527-8511-34c4444fef48,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda41d22ea7ff808cb20920820ccf87f95d0c484f75f853dec58fc5d4aaa461b,PodSandboxId:e07af8e7a3ecad5569ae3da9545b988c374ac9f7b90e8533dd68c1dd6ecef92c,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha
256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761986826170977750,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-z8nnd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c555360c-9a9f-4fdd-aa67-f18c3d2a4eb2,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b56bd6c195bd711f17cd7b927c9fbb20679383d08b6e954d3297e9850be5235,PodSandboxId:6d69749ca9bc78fa01c49c7d0757f3d0eafa3536279a622367a1a3b427e5d70c,Metadata:&ContainerMetad
ata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1761986821805194743,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-9ghvj,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d3c3231a-40d9-42f1-bc78-e2d1a104327a,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4c1be283a7f47690c854c85c4dcacc3e8b42f6727081c4a8a73e3e44c1d194,PodSandboxId:9f7ac0dd48cc1abeb427
3f865cde830d51e77c8bd29a6c76ccecaf35745e99f7,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761986758449407963,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d947f942-2149-492a-9b4e-1f9c22405815,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:2ad7748982f904bf89ac86d1b7be83acfe37cfe9d240db5a3d2236808b8910a3,PodSandboxId:ca1dd787f338ac0254f2b930b7369f671d7ee68d7732bee6af1cf786d745c456,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761986733821709901,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0182754-0c9c-458b-a340-20ec025cb56c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:9bb5f4d4e768dfe5c0cf6bc80363bf72a32d74ddba50c19fc7e3e82b2268e1d3,PodSandboxId:fec37181f6706eb4994bc850d0e6623521190c923720024b4407780ba5c3168a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761986732059653348,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-vssmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b8c16e-b583-47df-a5c2-97218d3ec5be,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ff7b8e8784408623315cf07e8942d13f74e52cb65ad09e2d25796114020c1,PodSandboxId:d62d15d11c4955eb24e7866e8b7732b6d4471d399c0e33cef74d06eb40917eec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761986725130503569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2rqh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131b2b2-f9b9-4197-8bc7-4d1bc185c804,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0a2f86b38f42fab057b3fea7994c15073ec1d05f3db97341f0fed0ad342cf9,PodSandboxId:e1fb2fcb1123b9a18ac17a1d8481c82478eed03828d094aab60d26b7c2f58bbd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761986724242985390,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fbmdq,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: dc5dd6b4-2f38-4c9d-acd8-92f7984fd96a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80489befa62b8185c103a7d016a78a5924e4c5187536cb66142d1c5f8cc4a5b5,PodSandboxId:d4cfa30f1a32a450d85f51370323574b5a0bcae75643efe39250a8b24cc1a1c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761986712208719638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-994396,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: e0eeda84be59c6c1c023d04bf2f88758,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:844d913e662bc4587cf597763a1bad42bb8a4bf500ce948d822cfcb86a7e9fde,PodSandboxId:e2f739ab181cd43a508788c71e0d98b6ca0994d643a2896de2364e7f842ffa0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761986712197993742,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d081dd6df6b55662a095a017ad5712,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdeec4098b47d6e27b77f71ac1761aeb26a09c97d53566cde6a7c5ae79150c25,PodSandboxId:f1c88f09470e5834b2b0cfcdaddaf03ac25c10fd6f3492dc69b5941eb059bbae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:
1761986712168522475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abcff5cb337834c6fd7a11d68a6b7be4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35bb45a49c1f528c9112deb8bfa037389ae6fae43afcbb2f86e4c3ed61156bf8,PodSandboxId:80615bf9878bb70db26be3ecace94169c4b7e503113541f10f7df27e95d8c035,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761986712170158026,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5912e2b5f9c4192157a57bf3d5021f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=03388c6d-aca1-445b-bcc3-643813e2ea57 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9aac7eb346903       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          12 minutes ago      Running             busybox                   0                   cdbcecc3e9d43       busybox
	f73cee1644b03       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd             14 minutes ago      Running             controller                0                   147663b03fe63       ingress-nginx-controller-675c5ddd98-9cxnd
	7fbb154c5ba00       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   16 minutes ago      Exited              patch                     0                   058d4f2c90db7       ingress-nginx-admission-patch-dmt9r
	5e6c68a57ee53       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   16 minutes ago      Exited              create                    0                   a5dfb28615faf       ingress-nginx-admission-create-6ptqs
	6d2226436f827       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac              17 minutes ago      Running             registry-proxy            0                   c449271f0824b       registry-proxy-bzs78
	dda41d22ea7ff       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb            17 minutes ago      Running             gadget                    0                   e07af8e7a3eca       gadget-z8nnd
	9b56bd6c195bd       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             17 minutes ago      Running             local-path-provisioner    0                   6d69749ca9bc7       local-path-provisioner-648f6765c9-9ghvj
	7b4c1be283a7f       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               18 minutes ago      Running             minikube-ingress-dns      0                   9f7ac0dd48cc1       kube-ingress-dns-minikube
	2ad7748982f90       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             18 minutes ago      Running             storage-provisioner       0                   ca1dd787f338a       storage-provisioner
	9bb5f4d4e768d       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     18 minutes ago      Running             amd-gpu-device-plugin     0                   fec37181f6706       amd-gpu-device-plugin-vssmp
	9d0ff7b8e8784       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             18 minutes ago      Running             coredns                   0                   d62d15d11c495       coredns-66bc5c9577-2rqh8
	9d0a2f86b38f4       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             18 minutes ago      Running             kube-proxy                0                   e1fb2fcb1123b       kube-proxy-fbmdq
	80489befa62b8       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             18 minutes ago      Running             kube-scheduler            0                   d4cfa30f1a32a       kube-scheduler-addons-994396
	844d913e662bc       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             18 minutes ago      Running             etcd                      0                   e2f739ab181cd       etcd-addons-994396
	35bb45a49c1f5       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             18 minutes ago      Running             kube-controller-manager   0                   80615bf9878bb       kube-controller-manager-addons-994396
	fdeec4098b47d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             18 minutes ago      Running             kube-apiserver            0                   f1c88f09470e5       kube-apiserver-addons-994396
	
	
	==> coredns [9d0ff7b8e8784408623315cf07e8942d13f74e52cb65ad09e2d25796114020c1] <==
	[INFO] 10.244.0.8:47911 - 57239 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000305908s
	[INFO] 10.244.0.8:59397 - 18145 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000239909s
	[INFO] 10.244.0.8:59397 - 38973 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000143846s
	[INFO] 10.244.0.8:59397 - 1868 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000323341s
	[INFO] 10.244.0.8:59397 - 54937 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000434842s
	[INFO] 10.244.0.8:59397 - 22504 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000123757s
	[INFO] 10.244.0.8:59397 - 45083 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000177762s
	[INFO] 10.244.0.8:59397 - 38120 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000090048s
	[INFO] 10.244.0.8:59397 - 33063 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000129098s
	[INFO] 10.244.0.8:44777 - 20029 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.001012885s
	[INFO] 10.244.0.8:44777 - 28204 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.00065373s
	[INFO] 10.244.0.8:44777 - 60668 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000119467s
	[INFO] 10.244.0.8:44777 - 14527 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.001044666s
	[INFO] 10.244.0.8:44777 - 30066 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000355708s
	[INFO] 10.244.0.8:44777 - 23665 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000274884s
	[INFO] 10.244.0.8:44777 - 60783 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000225149s
	[INFO] 10.244.0.8:44777 - 27437 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000069983s
	[INFO] 10.244.0.8:41235 - 18740 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000199141s
	[INFO] 10.244.0.8:41235 - 48489 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00013151s
	[INFO] 10.244.0.8:41235 - 38783 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000106246s
	[INFO] 10.244.0.8:41235 - 46401 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000259021s
	[INFO] 10.244.0.8:41235 - 46898 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000094585s
	[INFO] 10.244.0.8:41235 - 41125 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000766568s
	[INFO] 10.244.0.8:41235 - 22371 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000177001s
	[INFO] 10.244.0.8:41235 - 35572 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000941677s
	
	
	==> describe nodes <==
	Name:               addons-994396
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-994396
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=addons-994396
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T08_45_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-994396
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 08:45:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-994396
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:04:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:01:28 +0000   Sat, 01 Nov 2025 08:45:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:01:28 +0000   Sat, 01 Nov 2025 08:45:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:01:28 +0000   Sat, 01 Nov 2025 08:45:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:01:28 +0000   Sat, 01 Nov 2025 08:45:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    addons-994396
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 47158355a9594cbf84ea23a10000597a
	  System UUID:                47158355-a959-4cbf-84ea-23a10000597a
	  Boot ID:                    8b22796c-545f-4b51-954a-eb39441cd160
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gadget                      gadget-z8nnd                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-9cxnd    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         18m
	  kube-system                 amd-gpu-device-plugin-vssmp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-66bc5c9577-2rqh8                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     18m
	  kube-system                 etcd-addons-994396                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         18m
	  kube-system                 kube-apiserver-addons-994396                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-addons-994396        200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-fbmdq                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-addons-994396                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 registry-6b586f9694-b4ph6                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 registry-proxy-bzs78                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  local-path-storage          local-path-provisioner-648f6765c9-9ghvj      0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18m                kube-proxy       
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node addons-994396 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node addons-994396 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node addons-994396 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m                kubelet          Node addons-994396 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m                kubelet          Node addons-994396 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m                kubelet          Node addons-994396 status is now: NodeHasSufficientPID
	  Normal  NodeReady                18m                kubelet          Node addons-994396 status is now: NodeReady
	  Normal  RegisteredNode           18m                node-controller  Node addons-994396 event: Registered Node addons-994396 in Controller
	
	
	==> dmesg <==
	[  +3.822306] kauditd_printk_skb: 111 callbacks suppressed
	[  +1.002792] kauditd_printk_skb: 88 callbacks suppressed
	[Nov 1 08:49] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.000036] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.000133] kauditd_printk_skb: 29 callbacks suppressed
	[ +11.240953] kauditd_printk_skb: 41 callbacks suppressed
	[Nov 1 08:50] kauditd_printk_skb: 17 callbacks suppressed
	[ +34.452421] kauditd_printk_skb: 2 callbacks suppressed
	[Nov 1 08:51] kauditd_printk_skb: 26 callbacks suppressed
	[  +0.000047] kauditd_printk_skb: 5 callbacks suppressed
	[ +21.931610] kauditd_printk_skb: 26 callbacks suppressed
	[Nov 1 08:52] kauditd_printk_skb: 5 callbacks suppressed
	[  +6.008516] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.922747] kauditd_printk_skb: 38 callbacks suppressed
	[  +6.151130] kauditd_printk_skb: 37 callbacks suppressed
	[ +11.857033] kauditd_printk_skb: 84 callbacks suppressed
	[  +0.000069] kauditd_printk_skb: 22 callbacks suppressed
	[Nov 1 08:54] kauditd_printk_skb: 26 callbacks suppressed
	[ +40.501255] kauditd_printk_skb: 2 callbacks suppressed
	[Nov 1 08:55] kauditd_printk_skb: 9 callbacks suppressed
	[Nov 1 08:56] kauditd_printk_skb: 45 callbacks suppressed
	[Nov 1 08:57] kauditd_printk_skb: 38 callbacks suppressed
	[Nov 1 08:59] kauditd_printk_skb: 107 callbacks suppressed
	[Nov 1 09:01] kauditd_printk_skb: 9 callbacks suppressed
	[Nov 1 09:03] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [844d913e662bc4587cf597763a1bad42bb8a4bf500ce948d822cfcb86a7e9fde] <==
	{"level":"info","ts":"2025-11-01T08:47:54.978301Z","caller":"traceutil/trace.go:172","msg":"trace[127276739] transaction","detail":"{read_only:false; response_revision:1195; number_of_response:1; }","duration":"193.938157ms","start":"2025-11-01T08:47:54.784350Z","end":"2025-11-01T08:47:54.978289Z","steps":["trace[127276739] 'process raft request'  (duration: 193.811655ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:50:03.807211Z","caller":"traceutil/trace.go:172","msg":"trace[306428088] transaction","detail":"{read_only:false; response_revision:1410; number_of_response:1; }","duration":"143.076836ms","start":"2025-11-01T08:50:03.664107Z","end":"2025-11-01T08:50:03.807184Z","steps":["trace[306428088] 'process raft request'  (duration: 142.860459ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:50:30.399983Z","caller":"traceutil/trace.go:172","msg":"trace[417490432] transaction","detail":"{read_only:false; response_revision:1462; number_of_response:1; }","duration":"105.005558ms","start":"2025-11-01T08:50:30.294965Z","end":"2025-11-01T08:50:30.399970Z","steps":["trace[417490432] 'process raft request'  (duration: 104.840267ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:51:25.785305Z","caller":"traceutil/trace.go:172","msg":"trace[446064097] linearizableReadLoop","detail":"{readStateIndex:1675; appliedIndex:1675; }","duration":"202.139299ms","start":"2025-11-01T08:51:25.583130Z","end":"2025-11-01T08:51:25.785270Z","steps":["trace[446064097] 'read index received'  (duration: 202.133895ms)","trace[446064097] 'applied index is now lower than readState.Index'  (duration: 4.594µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T08:51:25.785474Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"202.320618ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:51:25.785498Z","caller":"traceutil/trace.go:172","msg":"trace[2127751376] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1576; }","duration":"202.392505ms","start":"2025-11-01T08:51:25.583101Z","end":"2025-11-01T08:51:25.785493Z","steps":["trace[2127751376] 'agreement among raft nodes before linearized reading'  (duration: 202.298341ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:51:25.785518Z","caller":"traceutil/trace.go:172","msg":"trace[25251410] transaction","detail":"{read_only:false; response_revision:1577; number_of_response:1; }","duration":"230.552599ms","start":"2025-11-01T08:51:25.554955Z","end":"2025-11-01T08:51:25.785507Z","steps":["trace[25251410] 'process raft request'  (duration: 230.448007ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:52:18.027453Z","caller":"traceutil/trace.go:172","msg":"trace[1612683542] linearizableReadLoop","detail":"{readStateIndex:1872; appliedIndex:1872; }","duration":"169.871386ms","start":"2025-11-01T08:52:17.857553Z","end":"2025-11-01T08:52:18.027424Z","steps":["trace[1612683542] 'read index received'  (duration: 169.865757ms)","trace[1612683542] 'applied index is now lower than readState.Index'  (duration: 4.911µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T08:52:18.027601Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"170.004057ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:52:18.027618Z","caller":"traceutil/trace.go:172","msg":"trace[354966435] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1760; }","duration":"170.064613ms","start":"2025-11-01T08:52:17.857549Z","end":"2025-11-01T08:52:18.027613Z","steps":["trace[354966435] 'agreement among raft nodes before linearized reading'  (duration: 169.976661ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:52:18.027617Z","caller":"traceutil/trace.go:172","msg":"trace[182557049] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1761; }","duration":"175.595316ms","start":"2025-11-01T08:52:17.852012Z","end":"2025-11-01T08:52:18.027607Z","steps":["trace[182557049] 'process raft request'  (duration: 175.503416ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:52:23.484737Z","caller":"traceutil/trace.go:172","msg":"trace[1326759402] linearizableReadLoop","detail":"{readStateIndex:1904; appliedIndex:1904; }","duration":"340.503004ms","start":"2025-11-01T08:52:23.144214Z","end":"2025-11-01T08:52:23.484717Z","steps":["trace[1326759402] 'read index received'  (duration: 340.496208ms)","trace[1326759402] 'applied index is now lower than readState.Index'  (duration: 5.868µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T08:52:23.485008Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"340.771395ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1114"}
	{"level":"info","ts":"2025-11-01T08:52:23.485058Z","caller":"traceutil/trace.go:172","msg":"trace[1039449345] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1790; }","duration":"340.841883ms","start":"2025-11-01T08:52:23.144209Z","end":"2025-11-01T08:52:23.485051Z","steps":["trace[1039449345] 'agreement among raft nodes before linearized reading'  (duration: 340.62868ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T08:52:23.485106Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T08:52:23.144193Z","time spent":"340.902265ms","remote":"127.0.0.1:36552","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":1,"response size":1137,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 "}
	{"level":"warn","ts":"2025-11-01T08:52:23.485553Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"287.574901ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:52:23.485588Z","caller":"traceutil/trace.go:172","msg":"trace[1585287071] range","detail":"{range_begin:/registry/namespaces; range_end:; response_count:0; response_revision:1791; }","duration":"287.617514ms","start":"2025-11-01T08:52:23.197963Z","end":"2025-11-01T08:52:23.485581Z","steps":["trace[1585287071] 'agreement among raft nodes before linearized reading'  (duration: 287.549253ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:52:23.485660Z","caller":"traceutil/trace.go:172","msg":"trace[1103263823] transaction","detail":"{read_only:false; response_revision:1791; number_of_response:1; }","duration":"361.459988ms","start":"2025-11-01T08:52:23.124191Z","end":"2025-11-01T08:52:23.485651Z","steps":["trace[1103263823] 'process raft request'  (duration: 361.180443ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T08:52:23.485795Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T08:52:23.124175Z","time spent":"361.507625ms","remote":"127.0.0.1:36760","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1766 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2025-11-01T08:55:13.580313Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1434}
	{"level":"info","ts":"2025-11-01T08:55:13.648379Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1434,"took":"67.304726ms","hash":2547452093,"current-db-size-bytes":5730304,"current-db-size":"5.7 MB","current-db-size-in-use-bytes":3653632,"current-db-size-in-use":"3.7 MB"}
	{"level":"info","ts":"2025-11-01T08:55:13.648498Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2547452093,"revision":1434,"compact-revision":-1}
	{"level":"info","ts":"2025-11-01T09:00:13.589551Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":2183}
	{"level":"info","ts":"2025-11-01T09:00:13.612749Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":2183,"took":"22.131368ms","hash":475132686,"current-db-size-bytes":5730304,"current-db-size":"5.7 MB","current-db-size-in-use-bytes":3395584,"current-db-size-in-use":"3.4 MB"}
	{"level":"info","ts":"2025-11-01T09:00:13.612805Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":475132686,"revision":2183,"compact-revision":1434}
	
	
	==> kernel <==
	 09:04:10 up 19 min,  0 users,  load average: 0.10, 0.20, 0.30
	Linux addons-994396 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [fdeec4098b47d6e27b77f71ac1761aeb26a09c97d53566cde6a7c5ae79150c25] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1101 08:48:03.297742       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.19.139:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.19.139:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.19.139:443: connect: connection refused" logger="UnhandledError"
	E1101 08:48:03.298496       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.19.139:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.19.139:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.19.139:443: connect: connection refused" logger="UnhandledError"
	I1101 08:48:03.353240       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1101 08:52:03.525330       1 conn.go:339] Error on socket receive: read tcp 192.168.39.195:8443->192.168.39.1:42910: use of closed network connection
	E1101 08:52:03.723785       1 conn.go:339] Error on socket receive: read tcp 192.168.39.195:8443->192.168.39.1:42940: use of closed network connection
	I1101 08:52:12.984624       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.226.149"}
	I1101 08:53:04.341444       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1101 08:55:15.302985       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 08:56:08.891135       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1101 08:56:09.140799       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.237.168"}
	I1101 08:58:47.522658       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1101 08:58:47.523024       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1101 08:58:47.568498       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1101 08:58:47.568590       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1101 08:58:47.569465       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1101 08:58:47.569533       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1101 08:58:47.596280       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1101 08:58:47.596336       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1101 08:58:47.604563       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1101 08:58:47.604642       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1101 08:58:48.571817       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1101 08:58:48.604641       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1101 08:58:48.640767       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [35bb45a49c1f528c9112deb8bfa037389ae6fae43afcbb2f86e4c3ed61156bf8] <==
	E1101 09:01:19.109030       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:01:19.110069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 09:01:55.568432       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:01:55.570014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 09:01:56.699667       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:01:56.701103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 09:02:14.568431       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:02:14.569965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 09:02:22.430103       1 csi_attacher.go:520] kubernetes.io/csi: Attach timeout after 2m0s [volume=20f8360d-b700-11f0-8520-5a386f3b776b; attachment.ID=csi-ec97830fc071971062432de05004c36b849e526217445c8f5764f892af15b17d]
	E1101 09:02:22.430279       1 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/hostpath.csi.k8s.io^20f8360d-b700-11f0-8520-5a386f3b776b podName: nodeName:}" failed. No retries permitted until 2025-11-01 09:02:22.930243737 +0000 UTC m=+1030.550365978 (durationBeforeRetry 500ms). Error: AttachVolume.Attach failed for volume "pvc-fff0d080-eca7-4751-8bfb-428597d20c3b" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^20f8360d-b700-11f0-8520-5a386f3b776b") from node "addons-994396" : timed out waiting for external-attacher of hostpath.csi.k8s.io CSI driver to attach volume 20f8360d-b700-11f0-8520-5a386f3b776b
	I1101 09:02:22.983204       1 reconciler.go:364] "attacherDetacher.AttachVolume started" logger="persistentvolume-attach-detach-controller" volumeName="kubernetes.io/csi/hostpath.csi.k8s.io^20f8360d-b700-11f0-8520-5a386f3b776b" nodeName="addons-994396" scheduledPods=["default/task-pv-pod"]
	E1101 09:02:34.073597       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:02:34.074729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 09:02:36.875342       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:02:36.876752       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 09:02:56.389544       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:02:56.390681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 09:03:12.851824       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:03:12.853113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 09:03:26.510182       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:03:26.511652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 09:03:31.415340       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:03:31.416876       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 09:04:05.247677       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:04:05.248782       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [9d0a2f86b38f42fab057b3fea7994c15073ec1d05f3db97341f0fed0ad342cf9] <==
	I1101 08:45:24.962819       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 08:45:25.066839       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 08:45:25.068064       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.195"]
	E1101 08:45:25.073313       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 08:45:25.410848       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 08:45:25.410962       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 08:45:25.410991       1 server_linux.go:132] "Using iptables Proxier"
	I1101 08:45:25.477946       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 08:45:25.478244       1 server.go:527] "Version info" version="v1.34.1"
	I1101 08:45:25.478277       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 08:45:25.484125       1 config.go:106] "Starting endpoint slice config controller"
	I1101 08:45:25.484405       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 08:45:25.491275       1 config.go:200] "Starting service config controller"
	I1101 08:45:25.491309       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 08:45:25.494813       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 08:45:25.496161       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 08:45:25.495379       1 config.go:309] "Starting node config controller"
	I1101 08:45:25.506423       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 08:45:25.506433       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 08:45:25.584706       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 08:45:25.592170       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 08:45:25.598016       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [80489befa62b8185c103a7d016a78a5924e4c5187536cb66142d1c5f8cc4a5b5] <==
	E1101 08:45:15.349728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 08:45:15.349881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 08:45:15.352076       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 08:45:15.352119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 08:45:15.352139       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 08:45:15.352358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 08:45:15.352409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 08:45:15.357367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 08:45:15.357513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 08:45:15.357652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 08:45:16.203110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 08:45:16.263373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 08:45:16.299073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 08:45:16.424658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 08:45:16.486112       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 08:45:16.556670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 08:45:16.568573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 08:45:16.598275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 08:45:16.651957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 08:45:16.662617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 08:45:16.674245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 08:45:16.759792       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1101 08:45:19.143863       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1101 09:02:24.980285       1 framework.go:1298] "Plugin failed" err="binding volumes: context deadline exceeded" plugin="VolumeBinding" pod="default/test-local-path" node="addons-994396"
	E1101 09:02:24.980563       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running PreBind plugin \"VolumeBinding\": binding volumes: context deadline exceeded" logger="UnhandledError" pod="default/test-local-path"
	
	
	==> kubelet <==
	Nov 01 09:03:43 addons-994396 kubelet[1497]: E1101 09:03:43.971273    1497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="9c49ac5d-18e5-470b-9217-c0a58f0636a1"
	Nov 01 09:03:46 addons-994396 kubelet[1497]: I1101 09:03:46.970484    1497 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-vssmp" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:03:48 addons-994396 kubelet[1497]: E1101 09:03:48.525571    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761987828525168457  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 09:03:48 addons-994396 kubelet[1497]: E1101 09:03:48.525619    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761987828525168457  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 09:03:49 addons-994396 kubelet[1497]: E1101 09:03:49.970428    1497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="8623da74-791e-4fd6-a974-60ebca5738a7"
	Nov 01 09:03:58 addons-994396 kubelet[1497]: E1101 09:03:58.528720    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761987838528219488  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 09:03:58 addons-994396 kubelet[1497]: E1101 09:03:58.528779    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761987838528219488  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 09:03:58 addons-994396 kubelet[1497]: E1101 09:03:58.889230    1497 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from manifest list: reading manifest sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Nov 01 09:03:58 addons-994396 kubelet[1497]: E1101 09:03:58.889308    1497 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from manifest list: reading manifest sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Nov 01 09:03:58 addons-994396 kubelet[1497]: E1101 09:03:58.890164    1497 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-2db794c4-2444-4d03-b933-772cf722902e_local-path-storage(c2c6242b-10ca-4397-9d00-1e3f0d0aa51b): ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 01 09:03:58 addons-994396 kubelet[1497]: E1101 09:03:58.890253    1497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"fetching target platform image selected from manifest list: reading manifest sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-2db794c4-2444-4d03-b933-772cf722902e" podUID="c2c6242b-10ca-4397-9d00-1e3f0d0aa51b"
	Nov 01 09:03:59 addons-994396 kubelet[1497]: I1101 09:03:59.782645    1497 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4f4hx\" (UniqueName: \"kubernetes.io/projected/c2c6242b-10ca-4397-9d00-1e3f0d0aa51b-kube-api-access-4f4hx\") pod \"c2c6242b-10ca-4397-9d00-1e3f0d0aa51b\" (UID: \"c2c6242b-10ca-4397-9d00-1e3f0d0aa51b\") "
	Nov 01 09:03:59 addons-994396 kubelet[1497]: I1101 09:03:59.782703    1497 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/c2c6242b-10ca-4397-9d00-1e3f0d0aa51b-script\") pod \"c2c6242b-10ca-4397-9d00-1e3f0d0aa51b\" (UID: \"c2c6242b-10ca-4397-9d00-1e3f0d0aa51b\") "
	Nov 01 09:03:59 addons-994396 kubelet[1497]: I1101 09:03:59.782723    1497 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/c2c6242b-10ca-4397-9d00-1e3f0d0aa51b-data\") pod \"c2c6242b-10ca-4397-9d00-1e3f0d0aa51b\" (UID: \"c2c6242b-10ca-4397-9d00-1e3f0d0aa51b\") "
	Nov 01 09:03:59 addons-994396 kubelet[1497]: I1101 09:03:59.782813    1497 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2c6242b-10ca-4397-9d00-1e3f0d0aa51b-data" (OuterVolumeSpecName: "data") pod "c2c6242b-10ca-4397-9d00-1e3f0d0aa51b" (UID: "c2c6242b-10ca-4397-9d00-1e3f0d0aa51b"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 01 09:03:59 addons-994396 kubelet[1497]: I1101 09:03:59.783241    1497 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2c6242b-10ca-4397-9d00-1e3f0d0aa51b-script" (OuterVolumeSpecName: "script") pod "c2c6242b-10ca-4397-9d00-1e3f0d0aa51b" (UID: "c2c6242b-10ca-4397-9d00-1e3f0d0aa51b"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Nov 01 09:03:59 addons-994396 kubelet[1497]: I1101 09:03:59.786475    1497 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2c6242b-10ca-4397-9d00-1e3f0d0aa51b-kube-api-access-4f4hx" (OuterVolumeSpecName: "kube-api-access-4f4hx") pod "c2c6242b-10ca-4397-9d00-1e3f0d0aa51b" (UID: "c2c6242b-10ca-4397-9d00-1e3f0d0aa51b"). InnerVolumeSpecName "kube-api-access-4f4hx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 01 09:03:59 addons-994396 kubelet[1497]: I1101 09:03:59.883162    1497 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4f4hx\" (UniqueName: \"kubernetes.io/projected/c2c6242b-10ca-4397-9d00-1e3f0d0aa51b-kube-api-access-4f4hx\") on node \"addons-994396\" DevicePath \"\""
	Nov 01 09:03:59 addons-994396 kubelet[1497]: I1101 09:03:59.883198    1497 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/c2c6242b-10ca-4397-9d00-1e3f0d0aa51b-script\") on node \"addons-994396\" DevicePath \"\""
	Nov 01 09:03:59 addons-994396 kubelet[1497]: I1101 09:03:59.883212    1497 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/c2c6242b-10ca-4397-9d00-1e3f0d0aa51b-data\") on node \"addons-994396\" DevicePath \"\""
	Nov 01 09:04:00 addons-994396 kubelet[1497]: E1101 09:04:00.970236    1497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="8623da74-791e-4fd6-a974-60ebca5738a7"
	Nov 01 09:04:01 addons-994396 kubelet[1497]: I1101 09:04:01.974534    1497 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2c6242b-10ca-4397-9d00-1e3f0d0aa51b" path="/var/lib/kubelet/pods/c2c6242b-10ca-4397-9d00-1e3f0d0aa51b/volumes"
	Nov 01 09:04:04 addons-994396 kubelet[1497]: W1101 09:04:04.816176    1497 logging.go:55] [core] [Channel #71 SubChannel #72]grpc: addrConn.createTransport failed to connect to {Addr: "/var/lib/kubelet/plugins/csi-hostpath/csi.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/lib/kubelet/plugins/csi-hostpath/csi.sock: connect: connection refused"
	Nov 01 09:04:08 addons-994396 kubelet[1497]: E1101 09:04:08.531756    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761987848531348298  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 09:04:08 addons-994396 kubelet[1497]: E1101 09:04:08.531806    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761987848531348298  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	
	
	==> storage-provisioner [2ad7748982f904bf89ac86d1b7be83acfe37cfe9d240db5a3d2236808b8910a3] <==
	W1101 09:03:45.431773       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:03:47.434998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:03:47.443082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:03:49.447386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:03:49.452704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:03:51.456690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:03:51.461881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:03:53.466584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:03:53.472076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:03:55.475471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:03:55.481931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:03:57.485597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:03:57.495087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:03:59.500152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:03:59.505216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:04:01.508748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:04:01.516536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:04:03.520248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:04:03.525495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:04:05.529103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:04:05.535203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:04:07.539159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:04:07.545226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:04:09.549280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:04:09.557019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-994396 -n addons-994396
helpers_test.go:269: (dbg) Run:  kubectl --context addons-994396 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-6ptqs ingress-nginx-admission-patch-dmt9r registry-6b586f9694-b4ph6
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-994396 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-6ptqs ingress-nginx-admission-patch-dmt9r registry-6b586f9694-b4ph6
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-994396 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-6ptqs ingress-nginx-admission-patch-dmt9r registry-6b586f9694-b4ph6: exit status 1 (92.256929ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-994396/192.168.39.195
	Start Time:       Sat, 01 Nov 2025 08:56:09 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rlw58 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rlw58:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  8m2s                  default-scheduler  Successfully assigned default/nginx to addons-994396
	  Warning  Failed     4m                    kubelet            Failed to pull image "docker.io/nginx:alpine": fetching target platform image selected from image index: reading manifest sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     103s (x3 over 6m45s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     103s (x4 over 6m45s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    28s (x10 over 6m44s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     28s (x10 over 6m44s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    16s (x5 over 8m2s)    kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-994396/192.168.39.195
	Start Time:       Sat, 01 Nov 2025 08:52:44 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mngk2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-mngk2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason              Age                   From                     Message
	  ----     ------              ----                  ----                     -------
	  Normal   Scheduled           11m                   default-scheduler        Successfully assigned default/task-pv-pod to addons-994396
	  Warning  Failed              6m1s (x2 over 7m46s)  kubelet                  Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling             2m9s (x5 over 11m)    kubelet                  Pulling image "docker.io/nginx"
	  Warning  FailedAttachVolume  110s                  attachdetach-controller  AttachVolume.Attach failed for volume "pvc-fff0d080-eca7-4751-8bfb-428597d20c3b" : timed out waiting for external-attacher of hostpath.csi.k8s.io CSI driver to attach volume 20f8360d-b700-11f0-8520-5a386f3b776b
	  Warning  Failed              74s (x3 over 10m)     kubelet                  Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed              74s (x5 over 10m)     kubelet                  Error: ErrImagePull
	  Normal   BackOff             12s (x14 over 10m)    kubelet                  Back-off pulling image "docker.io/nginx"
	  Warning  Failed              12s (x14 over 10m)    kubelet                  Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-65r97 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-65r97:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  108s  default-scheduler  running PreBind plugin "VolumeBinding": binding volumes: context deadline exceeded

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-6ptqs" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-dmt9r" not found
	Error from server (NotFound): pods "registry-6b586f9694-b4ph6" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-994396 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-6ptqs ingress-nginx-admission-patch-dmt9r registry-6b586f9694-b4ph6: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-994396 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-994396 addons disable ingress-dns --alsologtostderr -v=1: (1.702584665s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-994396 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-994396 addons disable ingress --alsologtostderr -v=1: (7.784261424s)
--- FAIL: TestAddons/parallel/Ingress (492.86s)

                                                
                                    
x
+
TestAddons/parallel/CSI (377.14s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1101 08:52:37.816621  534515 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1101 08:52:37.823629  534515 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1101 08:52:37.823664  534515 kapi.go:107] duration metric: took 7.062233ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 7.078031ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-994396 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-994396 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-994396 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-994396 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-994396 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-994396 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-994396 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-994396 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-994396 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [8623da74-791e-4fd6-a974-60ebca5738a7] Pending
helpers_test.go:352: "task-pv-pod" [8623da74-791e-4fd6-a974-60ebca5738a7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:337: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:567: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:567: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-994396 -n addons-994396
addons_test.go:567: TestAddons/parallel/CSI: showing logs for failed pods as of 2025-11-01 08:58:44.36800249 +0000 UTC m=+859.394679726
addons_test.go:567: (dbg) Run:  kubectl --context addons-994396 describe po task-pv-pod -n default
addons_test.go:567: (dbg) kubectl --context addons-994396 describe po task-pv-pod -n default:
Name:             task-pv-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-994396/192.168.39.195
Start Time:       Sat, 01 Nov 2025 08:52:44 +0000
Labels:           app=task-pv-pod
Annotations:      <none>
Status:           Pending
IP:               10.244.0.27
IPs:
IP:  10.244.0.27
Containers:
task-pv-container:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           80/TCP (http-server)
Host Port:      0/TCP (http-server)
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/usr/share/nginx/html from task-pv-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mngk2 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
task-pv-storage:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  hpvc
ReadOnly:   false
kube-api-access-mngk2:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  6m                   default-scheduler  Successfully assigned default/task-pv-pod to addons-994396
Warning  Failed     5m18s                kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    112s (x3 over 6m)    kubelet            Pulling image "docker.io/nginx"
Warning  Failed     33s (x3 over 5m18s)  kubelet            Error: ErrImagePull
Warning  Failed     33s (x2 over 2m18s)  kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    5s (x4 over 5m18s)   kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     5s (x4 over 5m18s)   kubelet            Error: ImagePullBackOff
addons_test.go:567: (dbg) Run:  kubectl --context addons-994396 logs task-pv-pod -n default
addons_test.go:567: (dbg) Non-zero exit: kubectl --context addons-994396 logs task-pv-pod -n default: exit status 1 (68.177954ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:567: kubectl --context addons-994396 logs task-pv-pod -n default: exit status 1
addons_test.go:568: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/CSI]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-994396 -n addons-994396
helpers_test.go:252: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-994396 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-994396 logs -n 25: (1.485689176s)
helpers_test.go:260: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:44 UTC │
	│ delete  │ -p download-only-147882                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-147882 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:44 UTC │
	│ start   │ -o=json --download-only -p download-only-664461 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-664461 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:44 UTC │
	│ delete  │ -p download-only-664461                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-664461 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:44 UTC │
	│ delete  │ -p download-only-147882                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-147882 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:44 UTC │
	│ delete  │ -p download-only-664461                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-664461 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:44 UTC │
	│ start   │ --download-only -p binary-mirror-775538 --alsologtostderr --binary-mirror http://127.0.0.1:36997 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-775538 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │                     │
	│ delete  │ -p binary-mirror-775538                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-775538 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:44 UTC │
	│ addons  │ enable dashboard -p addons-994396                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │                     │
	│ addons  │ disable dashboard -p addons-994396                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │                     │
	│ start   │ -p addons-994396 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:51 UTC │
	│ addons  │ addons-994396 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:51 UTC │ 01 Nov 25 08:51 UTC │
	│ addons  │ addons-994396 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:52 UTC │ 01 Nov 25 08:52 UTC │
	│ addons  │ enable headlamp -p addons-994396 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:52 UTC │ 01 Nov 25 08:52 UTC │
	│ addons  │ addons-994396 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:52 UTC │ 01 Nov 25 08:52 UTC │
	│ addons  │ addons-994396 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:52 UTC │ 01 Nov 25 08:52 UTC │
	│ addons  │ addons-994396 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:52 UTC │ 01 Nov 25 08:52 UTC │
	│ addons  │ addons-994396 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:52 UTC │ 01 Nov 25 08:52 UTC │
	│ addons  │ addons-994396 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:54 UTC │ 01 Nov 25 08:56 UTC │
	│ addons  │ addons-994396 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:57 UTC │ 01 Nov 25 08:57 UTC │
	│ addons  │ addons-994396 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:57 UTC │ 01 Nov 25 08:57 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-994396                                                                                                                                                                                                                                                                                                                                                                                         │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:57 UTC │ 01 Nov 25 08:57 UTC │
	│ addons  │ addons-994396 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:57 UTC │ 01 Nov 25 08:57 UTC │
	│ addons  │ addons-994396 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:58 UTC │ 01 Nov 25 08:58 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 08:44:38
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 08:44:38.415244  535088 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:44:38.415511  535088 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:44:38.415520  535088 out.go:374] Setting ErrFile to fd 2...
	I1101 08:44:38.415525  535088 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:44:38.415722  535088 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-530629/.minikube/bin
	I1101 08:44:38.416292  535088 out.go:368] Setting JSON to false
	I1101 08:44:38.417206  535088 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":62800,"bootTime":1761923878,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 08:44:38.417275  535088 start.go:143] virtualization: kvm guest
	I1101 08:44:38.419180  535088 out.go:179] * [addons-994396] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 08:44:38.420576  535088 notify.go:221] Checking for updates...
	I1101 08:44:38.420602  535088 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 08:44:38.422388  535088 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 08:44:38.423762  535088 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-530629/kubeconfig
	I1101 08:44:38.425054  535088 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-530629/.minikube
	I1101 08:44:38.426433  535088 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 08:44:38.427613  535088 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 08:44:38.429086  535088 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 08:44:38.459669  535088 out.go:179] * Using the kvm2 driver based on user configuration
	I1101 08:44:38.460716  535088 start.go:309] selected driver: kvm2
	I1101 08:44:38.460736  535088 start.go:930] validating driver "kvm2" against <nil>
	I1101 08:44:38.460750  535088 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 08:44:38.461509  535088 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 08:44:38.461750  535088 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 08:44:38.461788  535088 cni.go:84] Creating CNI manager for ""
	I1101 08:44:38.461839  535088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 08:44:38.461847  535088 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1101 08:44:38.461887  535088 start.go:353] cluster config:
	{Name:addons-994396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-994396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 08:44:38.462012  535088 iso.go:125] acquiring lock: {Name:mk4a0ae0d13e232f8e381ad8e5059e42b27a0733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 08:44:38.463350  535088 out.go:179] * Starting "addons-994396" primary control-plane node in "addons-994396" cluster
	I1101 08:44:38.464523  535088 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 08:44:38.464559  535088 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-530629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 08:44:38.464570  535088 cache.go:59] Caching tarball of preloaded images
	I1101 08:44:38.464648  535088 preload.go:233] Found /home/jenkins/minikube-integration/21833-530629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 08:44:38.464659  535088 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 08:44:38.464982  535088 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/config.json ...
	I1101 08:44:38.465015  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/config.json: {Name:mk89a75531523cc17e10cf65ac144e466baef6b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:44:38.465175  535088 start.go:360] acquireMachinesLock for addons-994396: {Name:mk0f0dee5270210132f861d1e08706cfde31b35b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 08:44:38.465227  535088 start.go:364] duration metric: took 38.791µs to acquireMachinesLock for "addons-994396"
	I1101 08:44:38.465244  535088 start.go:93] Provisioning new machine with config: &{Name:addons-994396 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-994396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 08:44:38.465309  535088 start.go:125] createHost starting for "" (driver="kvm2")
	I1101 08:44:38.467651  535088 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1101 08:44:38.467824  535088 start.go:159] libmachine.API.Create for "addons-994396" (driver="kvm2")
	I1101 08:44:38.467852  535088 client.go:173] LocalClient.Create starting
	I1101 08:44:38.467960  535088 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem
	I1101 08:44:38.525135  535088 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem
	I1101 08:44:38.966403  535088 main.go:143] libmachine: creating domain...
	I1101 08:44:38.966427  535088 main.go:143] libmachine: creating network...
	I1101 08:44:38.968049  535088 main.go:143] libmachine: found existing default network
	I1101 08:44:38.968268  535088 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 08:44:38.968754  535088 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b9b7d0}
	I1101 08:44:38.968919  535088 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-994396</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 08:44:38.974811  535088 main.go:143] libmachine: creating private network mk-addons-994396 192.168.39.0/24...
	I1101 08:44:39.051181  535088 main.go:143] libmachine: private network mk-addons-994396 192.168.39.0/24 created
	I1101 08:44:39.051459  535088 main.go:143] libmachine: <network>
	  <name>mk-addons-994396</name>
	  <uuid>960ab3a9-e2ba-413f-8b77-ff4745b036d0</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:3e:a3:01'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
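The private network created above is plain libvirt. Below is a minimal Go sketch of defining and starting an equivalent network by shelling out to virsh; it assumes libvirt-clients is installed and that the caller may connect to qemu:///system, and it is not the kvm2 driver's own code path, only a reproduction of the same definition.

// networkdefine.go: sketch of creating the mk-addons-994396 libvirt network via virsh.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

const networkXML = `<network>
  <name>mk-addons-994396</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	// Write the definition to a temporary file so virsh can read it.
	f, err := os.CreateTemp("", "mk-network-*.xml")
	if err != nil {
		log.Fatal(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(networkXML); err != nil {
		log.Fatal(err)
	}
	f.Close()

	// Define the network, then bring it up.
	for _, args := range [][]string{
		{"net-define", f.Name()},
		{"net-start", "mk-addons-994396"},
	} {
		full := append([]string{"--connect", "qemu:///system"}, args...)
		out, err := exec.Command("virsh", full...).CombinedOutput()
		fmt.Printf("virsh %v: %s\n", args, out)
		if err != nil {
			log.Fatal(err)
		}
	}
}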
	
	I1101 08:44:39.051486  535088 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396 ...
	I1101 08:44:39.051511  535088 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21833-530629/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso
	I1101 08:44:39.051536  535088 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21833-530629/.minikube
	I1101 08:44:39.051601  535088 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21833-530629/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21833-530629/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso...
	I1101 08:44:39.334278  535088 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa...
	I1101 08:44:39.562590  535088 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/addons-994396.rawdisk...
	I1101 08:44:39.562642  535088 main.go:143] libmachine: Writing magic tar header
	I1101 08:44:39.562674  535088 main.go:143] libmachine: Writing SSH key tar header
	I1101 08:44:39.562773  535088 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396 ...
	I1101 08:44:39.562837  535088 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396
	I1101 08:44:39.562920  535088 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396 (perms=drwx------)
	I1101 08:44:39.562944  535088 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21833-530629/.minikube/machines
	I1101 08:44:39.562958  535088 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21833-530629/.minikube/machines (perms=drwxr-xr-x)
	I1101 08:44:39.562977  535088 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21833-530629/.minikube
	I1101 08:44:39.562988  535088 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21833-530629/.minikube (perms=drwxr-xr-x)
	I1101 08:44:39.562999  535088 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21833-530629
	I1101 08:44:39.563010  535088 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21833-530629 (perms=drwxrwxr-x)
	I1101 08:44:39.563022  535088 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1101 08:44:39.563032  535088 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1101 08:44:39.563043  535088 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1101 08:44:39.563053  535088 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1101 08:44:39.563063  535088 main.go:143] libmachine: checking permissions on dir: /home
	I1101 08:44:39.563072  535088 main.go:143] libmachine: skipping /home - not owner
	I1101 08:44:39.563079  535088 main.go:143] libmachine: defining domain...
	I1101 08:44:39.564528  535088 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-994396</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/addons-994396.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-994396'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1101 08:44:39.569846  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:73:0a:92 in network default
	I1101 08:44:39.570479  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:39.570497  535088 main.go:143] libmachine: starting domain...
	I1101 08:44:39.570501  535088 main.go:143] libmachine: ensuring networks are active...
	I1101 08:44:39.571361  535088 main.go:143] libmachine: Ensuring network default is active
	I1101 08:44:39.571760  535088 main.go:143] libmachine: Ensuring network mk-addons-994396 is active
	I1101 08:44:39.572463  535088 main.go:143] libmachine: getting domain XML...
	I1101 08:44:39.574016  535088 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-994396</name>
	  <uuid>47158355-a959-4cbf-84ea-23a10000597a</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/addons-994396.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:2a:d2:e3'/>
	      <source network='mk-addons-994396'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:73:0a:92'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
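The MAC addresses reported above are read back from the defined domain. A small Go sketch, assuming virsh is reachable at qemu:///system, that dumps the same domain XML and extracts each interface's MAC address and source network:

// domiflist.go: sketch that dumps the addons-994396 domain XML and lists interface MACs.
package main

import (
	"encoding/xml"
	"fmt"
	"log"
	"os/exec"
)

// Only the fields needed to recover the MAC address and source network per interface.
type domain struct {
	Interfaces []struct {
		MAC struct {
			Address string `xml:"address,attr"`
		} `xml:"mac"`
		Source struct {
			Network string `xml:"network,attr"`
		} `xml:"source"`
	} `xml:"devices>interface"`
}

func main() {
	out, err := exec.Command("virsh", "--connect", "qemu:///system",
		"dumpxml", "addons-994396").Output()
	if err != nil {
		log.Fatal(err)
	}
	var d domain
	if err := xml.Unmarshal(out, &d); err != nil {
		log.Fatal(err)
	}
	for _, iface := range d.Interfaces {
		fmt.Printf("domain addons-994396 has MAC %s in network %s\n",
			iface.MAC.Address, iface.Source.Network)
	}
}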
	
	I1101 08:44:40.850976  535088 main.go:143] libmachine: waiting for domain to start...
	I1101 08:44:40.852401  535088 main.go:143] libmachine: domain is now running
	I1101 08:44:40.852417  535088 main.go:143] libmachine: waiting for IP...
	I1101 08:44:40.853195  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:40.853985  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:40.853994  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:40.854261  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:40.854309  535088 retry.go:31] will retry after 216.262446ms: waiting for domain to come up
	I1101 08:44:41.071837  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:41.072843  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:41.072862  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:41.073274  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:41.073319  535088 retry.go:31] will retry after 360.302211ms: waiting for domain to come up
	I1101 08:44:41.434879  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:41.435804  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:41.435822  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:41.436172  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:41.436214  535088 retry.go:31] will retry after 371.777554ms: waiting for domain to come up
	I1101 08:44:41.809947  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:41.810703  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:41.810722  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:41.811072  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:41.811112  535088 retry.go:31] will retry after 462.843758ms: waiting for domain to come up
	I1101 08:44:42.275984  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:42.276618  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:42.276637  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:42.276993  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:42.277037  535088 retry.go:31] will retry after 560.265466ms: waiting for domain to come up
	I1101 08:44:42.838931  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:42.839781  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:42.839798  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:42.840224  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:42.840268  535088 retry.go:31] will retry after 839.411139ms: waiting for domain to come up
	I1101 08:44:43.681040  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:43.681790  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:43.681802  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:43.682192  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:43.682243  535088 retry.go:31] will retry after 1.099878288s: waiting for domain to come up
	I1101 08:44:44.783686  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:44.784502  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:44.784521  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:44.784840  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:44.784888  535088 retry.go:31] will retry after 1.052374717s: waiting for domain to come up
	I1101 08:44:45.839257  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:45.839889  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:45.839926  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:45.840243  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:45.840284  535088 retry.go:31] will retry after 1.704542625s: waiting for domain to come up
	I1101 08:44:47.547411  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:47.548205  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:47.548225  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:47.548588  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:47.548630  535088 retry.go:31] will retry after 1.752267255s: waiting for domain to come up
	I1101 08:44:49.302359  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:49.303199  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:49.303210  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:49.303522  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:49.303559  535088 retry.go:31] will retry after 2.861627149s: waiting for domain to come up
	I1101 08:44:52.168696  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:52.169368  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:52.169385  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:52.169681  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:52.169738  535088 retry.go:31] will retry after 2.277819072s: waiting for domain to come up
	I1101 08:44:54.449193  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:54.449957  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:54.449978  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:54.450273  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:54.450316  535088 retry.go:31] will retry after 3.87405165s: waiting for domain to come up
	I1101 08:44:58.329388  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.330073  535088 main.go:143] libmachine: domain addons-994396 has current primary IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.330089  535088 main.go:143] libmachine: found domain IP: 192.168.39.195
	I1101 08:44:58.330096  535088 main.go:143] libmachine: reserving static IP address...
	I1101 08:44:58.330490  535088 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-994396", mac: "52:54:00:2a:d2:e3", ip: "192.168.39.195"} in network mk-addons-994396
	I1101 08:44:58.532247  535088 main.go:143] libmachine: reserved static IP address 192.168.39.195 for domain addons-994396
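The loop above polled libvirt for a DHCP lease matching the guest's MAC and then reserved the discovered address. A rough equivalent sketched in Go around virsh; the network name, MAC, and hostname are taken from the log, while the static reservation via net-update is an assumption about how such a step could be done, not the driver's exact call:

// leasewait.go: sketch of waiting for a DHCP lease and reserving the address.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

const (
	network = "mk-addons-994396"
	mac     = "52:54:00:2a:d2:e3"
)

// leaseIP scans `virsh net-dhcp-leases` output for a lease matching the MAC.
func leaseIP() (string, bool) {
	out, err := exec.Command("virsh", "--connect", "qemu:///system",
		"net-dhcp-leases", network).Output()
	if err != nil {
		return "", false
	}
	for _, line := range strings.Split(string(out), "\n") {
		if !strings.Contains(line, mac) {
			continue
		}
		for _, field := range strings.Fields(line) {
			// The IP column is printed as CIDR, e.g. 192.168.39.195/24.
			if strings.Count(field, ".") == 3 && strings.Contains(field, "/") {
				return strings.SplitN(field, "/", 2)[0], true
			}
		}
	}
	return "", false
}

func main() {
	var ip string
	for attempt := 0; attempt < 30; attempt++ {
		if got, ok := leaseIP(); ok {
			ip = got
			break
		}
		time.Sleep(2 * time.Second) // fixed backoff instead of the log's growing retry delays
	}
	if ip == "" {
		log.Fatal("domain never obtained a DHCP lease")
	}
	fmt.Println("found domain IP:", ip)

	// Pin the address as a static DHCP host entry (hypothetical equivalent of
	// the "reserving static IP address" step above).
	host := fmt.Sprintf("<host mac='%s' name='addons-994396' ip='%s'/>", mac, ip)
	out, err := exec.Command("virsh", "--connect", "qemu:///system",
		"net-update", network, "add", "ip-dhcp-host", host,
		"--live", "--config").CombinedOutput()
	if err != nil {
		log.Fatalf("net-update failed: %v: %s", err, out)
	}
	fmt.Println("reserved", ip, "for", mac)
}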
	I1101 08:44:58.532270  535088 main.go:143] libmachine: waiting for SSH...
	I1101 08:44:58.532276  535088 main.go:143] libmachine: Getting to WaitForSSH function...
	I1101 08:44:58.535646  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.536214  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:58.536242  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.536445  535088 main.go:143] libmachine: Using SSH client type: native
	I1101 08:44:58.536737  535088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1101 08:44:58.536748  535088 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1101 08:44:58.655800  535088 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 08:44:58.656194  535088 main.go:143] libmachine: domain creation complete
	I1101 08:44:58.657668  535088 machine.go:94] provisionDockerMachine start ...
	I1101 08:44:58.660444  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.660857  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:58.660881  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.661078  535088 main.go:143] libmachine: Using SSH client type: native
	I1101 08:44:58.661273  535088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1101 08:44:58.661283  535088 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 08:44:58.781217  535088 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1101 08:44:58.781253  535088 buildroot.go:166] provisioning hostname "addons-994396"
	I1101 08:44:58.784387  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.784787  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:58.784821  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.784992  535088 main.go:143] libmachine: Using SSH client type: native
	I1101 08:44:58.785186  535088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1101 08:44:58.785198  535088 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-994396 && echo "addons-994396" | sudo tee /etc/hostname
	I1101 08:44:58.921865  535088 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-994396
	
	I1101 08:44:58.924651  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.925106  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:58.925158  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.925363  535088 main.go:143] libmachine: Using SSH client type: native
	I1101 08:44:58.925623  535088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1101 08:44:58.925647  535088 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-994396' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-994396/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-994396' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 08:44:59.053021  535088 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 08:44:59.053062  535088 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21833-530629/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-530629/.minikube}
	I1101 08:44:59.053121  535088 buildroot.go:174] setting up certificates
	I1101 08:44:59.053134  535088 provision.go:84] configureAuth start
	I1101 08:44:59.056039  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.056491  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.056527  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.059390  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.059768  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.059793  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.059971  535088 provision.go:143] copyHostCerts
	I1101 08:44:59.060039  535088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-530629/.minikube/key.pem (1675 bytes)
	I1101 08:44:59.060157  535088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-530629/.minikube/ca.pem (1078 bytes)
	I1101 08:44:59.060215  535088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-530629/.minikube/cert.pem (1123 bytes)
	I1101 08:44:59.060262  535088 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-530629/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca-key.pem org=jenkins.addons-994396 san=[127.0.0.1 192.168.39.195 addons-994396 localhost minikube]
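The server certificate generated here is signed by the local CA for the SANs listed in the log line (127.0.0.1, 192.168.39.195, addons-994396, localhost, minikube). A minimal Go sketch of that kind of step using crypto/x509; the file names and the PKCS#1 RSA key format are assumptions, not minikube's actual layout:

// servercert.go: sketch of signing a machine server certificate with the local CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func mustRead(path string) []byte {
	b, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	return b
}

func main() {
	// Load the CA created earlier in the log (paths are placeholders).
	caBlock, _ := pem.Decode(mustRead("ca.pem"))
	keyBlock, _ := pem.Decode(mustRead("ca-key.pem"))
	if caBlock == nil || keyBlock == nil {
		log.Fatal("could not decode CA PEM data")
	}
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 key
	if err != nil {
		log.Fatal(err)
	}

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-994396"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.195")},
		DNSNames:    []string{"addons-994396", "localhost", "minikube"},
	}

	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	// The private key would be written alongside; here only the cert is emitted.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}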
	I1101 08:44:59.098818  535088 provision.go:177] copyRemoteCerts
	I1101 08:44:59.098909  535088 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 08:44:59.101492  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.101853  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.101876  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.102044  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:44:59.192919  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 08:44:59.224374  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 08:44:59.254587  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 08:44:59.285112  535088 provision.go:87] duration metric: took 231.963697ms to configureAuth
	I1101 08:44:59.285151  535088 buildroot.go:189] setting minikube options for container-runtime
	I1101 08:44:59.285333  535088 config.go:182] Loaded profile config "addons-994396": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:44:59.288033  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.288440  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.288461  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.288660  535088 main.go:143] libmachine: Using SSH client type: native
	I1101 08:44:59.288854  535088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1101 08:44:59.288872  535088 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 08:44:59.552498  535088 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
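The command above drops a CRIO_MINIKUBE_OPTIONS entry with an --insecure-registry flag for the service CIDR into /etc/sysconfig/crio.minikube and restarts cri-o. A small Go sketch that pushes the same drop-in over plain ssh, reusing the key path, user, and address shown elsewhere in this log; this is illustrative only, the test harness runs the command through its own ssh_runner:

// criodropin.go: sketch of applying the same cri-o sysconfig drop-in over ssh.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	script := `sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`

	cmd := exec.Command("ssh",
		"-i", "/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa",
		"-o", "StrictHostKeyChecking=no",
		"docker@192.168.39.195", script)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		log.Fatal(err)
	}
}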
	
	I1101 08:44:59.552535  535088 machine.go:97] duration metric: took 894.848438ms to provisionDockerMachine
	I1101 08:44:59.552551  535088 client.go:176] duration metric: took 21.084691653s to LocalClient.Create
	I1101 08:44:59.552575  535088 start.go:167] duration metric: took 21.084749844s to libmachine.API.Create "addons-994396"
	I1101 08:44:59.552585  535088 start.go:293] postStartSetup for "addons-994396" (driver="kvm2")
	I1101 08:44:59.552598  535088 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 08:44:59.552698  535088 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 08:44:59.555985  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.556410  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.556446  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.556594  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:44:59.646378  535088 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 08:44:59.651827  535088 info.go:137] Remote host: Buildroot 2025.02
	I1101 08:44:59.651860  535088 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-530629/.minikube/addons for local assets ...
	I1101 08:44:59.652002  535088 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-530629/.minikube/files for local assets ...
	I1101 08:44:59.652045  535088 start.go:296] duration metric: took 99.451778ms for postStartSetup
	I1101 08:44:59.655428  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.655951  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.655983  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.656303  535088 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/config.json ...
	I1101 08:44:59.656524  535088 start.go:128] duration metric: took 21.191204758s to createHost
	I1101 08:44:59.659225  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.659662  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.659688  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.659918  535088 main.go:143] libmachine: Using SSH client type: native
	I1101 08:44:59.660165  535088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1101 08:44:59.660179  535088 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1101 08:44:59.778959  535088 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761986699.744832808
	
	I1101 08:44:59.778992  535088 fix.go:216] guest clock: 1761986699.744832808
	I1101 08:44:59.779003  535088 fix.go:229] Guest: 2025-11-01 08:44:59.744832808 +0000 UTC Remote: 2025-11-01 08:44:59.656538269 +0000 UTC m=+21.291332648 (delta=88.294539ms)
	I1101 08:44:59.779025  535088 fix.go:200] guest clock delta is within tolerance: 88.294539ms
	I1101 08:44:59.779033  535088 start.go:83] releasing machines lock for "addons-994396", held for 21.31379566s
	I1101 08:44:59.782561  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.783052  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.783085  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.783744  535088 ssh_runner.go:195] Run: cat /version.json
	I1101 08:44:59.783923  535088 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 08:44:59.786949  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.787338  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.787364  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.787467  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.787547  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:44:59.788054  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.788100  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.788306  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:44:59.898855  535088 ssh_runner.go:195] Run: systemctl --version
	I1101 08:44:59.905749  535088 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 08:45:00.064091  535088 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 08:45:00.072201  535088 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 08:45:00.072263  535088 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 08:45:00.092562  535088 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 08:45:00.092584  535088 start.go:496] detecting cgroup driver to use...
	I1101 08:45:00.092661  535088 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 08:45:00.112010  535088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 08:45:00.129164  535088 docker.go:218] disabling cri-docker service (if available) ...
	I1101 08:45:00.129222  535088 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 08:45:00.147169  535088 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 08:45:00.164876  535088 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 08:45:00.317011  535088 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 08:45:00.521291  535088 docker.go:234] disabling docker service ...
	I1101 08:45:00.521377  535088 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 08:45:00.537927  535088 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 08:45:00.552544  535088 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 08:45:00.714401  535088 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 08:45:00.855387  535088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 08:45:00.871802  535088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 08:45:00.895848  535088 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 08:45:00.895969  535088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:45:00.908735  535088 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 08:45:00.908831  535088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:45:00.924244  535088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:45:00.938467  535088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:45:00.951396  535088 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 08:45:00.965054  535088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:45:00.977595  535088 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:45:00.998868  535088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
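
For reference, the sequence of sed edits above leaves /etc/crio/crio.conf.d/02-crio.conf with a fragment along these lines (reconstructed from the commands shown, not read back from the guest; whatever other settings the drop-in already carries are untouched):

pause_image = "registry.k8s.io/pause:3.10.1"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
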
	I1101 08:45:01.011547  535088 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 08:45:01.022709  535088 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 08:45:01.022775  535088 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 08:45:01.044963  535088 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 08:45:01.057499  535088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 08:45:01.203336  535088 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 08:45:01.311792  535088 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 08:45:01.311884  535088 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 08:45:01.317453  535088 start.go:564] Will wait 60s for crictl version
	I1101 08:45:01.317538  535088 ssh_runner.go:195] Run: which crictl
	I1101 08:45:01.321986  535088 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 08:45:01.367266  535088 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1101 08:45:01.367363  535088 ssh_runner.go:195] Run: crio --version
	I1101 08:45:01.398127  535088 ssh_runner.go:195] Run: crio --version
	I1101 08:45:01.431424  535088 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1101 08:45:01.435939  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:01.436441  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:01.436471  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:01.436732  535088 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1101 08:45:01.441662  535088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 08:45:01.457635  535088 kubeadm.go:884] updating cluster {Name:addons-994396 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-994396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 08:45:01.457753  535088 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 08:45:01.457802  535088 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 08:45:01.495090  535088 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1101 08:45:01.495193  535088 ssh_runner.go:195] Run: which lz4
	I1101 08:45:01.500348  535088 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 08:45:01.506036  535088 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 08:45:01.506082  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1101 08:45:03.083875  535088 crio.go:462] duration metric: took 1.583585669s to copy over tarball
	I1101 08:45:03.084036  535088 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 08:45:04.665932  535088 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.581842537s)
	I1101 08:45:04.665965  535088 crio.go:469] duration metric: took 1.582007439s to extract the tarball
	I1101 08:45:04.665976  535088 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 08:45:04.707682  535088 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 08:45:04.751036  535088 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 08:45:04.751073  535088 cache_images.go:86] Images are preloaded, skipping loading
	I1101 08:45:04.751085  535088 kubeadm.go:935] updating node { 192.168.39.195 8443 v1.34.1 crio true true} ...
	I1101 08:45:04.751212  535088 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-994396 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.195
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-994396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 08:45:04.751302  535088 ssh_runner.go:195] Run: crio config
	I1101 08:45:04.801702  535088 cni.go:84] Creating CNI manager for ""
	I1101 08:45:04.801733  535088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 08:45:04.801758  535088 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 08:45:04.801791  535088 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.195 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-994396 NodeName:addons-994396 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.195"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.195 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 08:45:04.801978  535088 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.195
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-994396"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.195"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.195"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 08:45:04.802066  535088 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 08:45:04.814571  535088 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 08:45:04.814653  535088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 08:45:04.826605  535088 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1101 08:45:04.846937  535088 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 08:45:04.868213  535088 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1101 08:45:04.888962  535088 ssh_runner.go:195] Run: grep 192.168.39.195	control-plane.minikube.internal$ /etc/hosts
	I1101 08:45:04.893299  535088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.195	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 08:45:04.908547  535088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 08:45:05.049704  535088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 08:45:05.081089  535088 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396 for IP: 192.168.39.195
	I1101 08:45:05.081124  535088 certs.go:195] generating shared ca certs ...
	I1101 08:45:05.081146  535088 certs.go:227] acquiring lock for ca certs: {Name:mkfa41f6ee02a6d4adbbbd414d6f4b29bf47b076 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.081312  535088 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-530629/.minikube/ca.key
	I1101 08:45:05.135626  535088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt ...
	I1101 08:45:05.135669  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt: {Name:mk42d9a91568201fc7bb838317bb109a9d557e4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.135920  535088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-530629/.minikube/ca.key ...
	I1101 08:45:05.135935  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/ca.key: {Name:mk8868035ca874da4b6bcd8361c76f97522a09dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.136031  535088 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.key
	I1101 08:45:05.223112  535088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.crt ...
	I1101 08:45:05.223159  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.crt: {Name:mk17c24c1e5b8188202459729e4a5c2f9a4008a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.223343  535088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.key ...
	I1101 08:45:05.223356  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.key: {Name:mk64bb220f00b339bafb0b18442258c31c6af7ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.223432  535088 certs.go:257] generating profile certs ...
	I1101 08:45:05.223509  535088 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.key
	I1101 08:45:05.223524  535088 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt with IP's: []
	I1101 08:45:05.791770  535088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt ...
	I1101 08:45:05.791805  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: {Name:mk739df015c10897beee55b57aac6a9687c49aee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.791993  535088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.key ...
	I1101 08:45:05.792008  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.key: {Name:mk22e303787fbf3b8945b47ac917db338129138f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.792086  535088 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.key.2a971b58
	I1101 08:45:05.792105  535088 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.crt.2a971b58 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.195]
	I1101 08:45:05.964688  535088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.crt.2a971b58 ...
	I1101 08:45:05.964721  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.crt.2a971b58: {Name:mkc85c65639cbe37cb2f18c20238504fe651c568 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.964892  535088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.key.2a971b58 ...
	I1101 08:45:05.964917  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.key.2a971b58: {Name:mk0a07f1288d6c9ced8ef2d4bb53cbfce6f3c734 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.964998  535088 certs.go:382] copying /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.crt.2a971b58 -> /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.crt
	I1101 08:45:05.965075  535088 certs.go:386] copying /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.key.2a971b58 -> /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.key
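
The crypto.go steps above issue the apiserver serving certificate against the minikubeCA key pair with the IP SANs listed in the log. The Go sketch below shows the general technique with the standard library only; it is not minikube's crypto.go, error checks are elided for brevity, and the throwaway CA merely stands in for the real one:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for minikubeCA (error checks elided for brevity).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving certificate carrying the IP SANs reported in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.195"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Emit the certificate PEM, analogous to apiserver.crt being written above.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
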
	I1101 08:45:05.965124  535088 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.key
	I1101 08:45:05.965142  535088 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.crt with IP's: []
	I1101 08:45:06.097161  535088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.crt ...
	I1101 08:45:06.097197  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.crt: {Name:mke456d45c85355b327c605777e7e939bd178f8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:06.097374  535088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.key ...
	I1101 08:45:06.097388  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.key: {Name:mk96b8f9598bf40057b4d6b2c6e97a30a363b3bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:06.097558  535088 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 08:45:06.097602  535088 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem (1078 bytes)
	I1101 08:45:06.097627  535088 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem (1123 bytes)
	I1101 08:45:06.097651  535088 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/key.pem (1675 bytes)
	I1101 08:45:06.098363  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 08:45:06.130486  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 08:45:06.160429  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 08:45:06.189962  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 08:45:06.219452  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 08:45:06.250552  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 08:45:06.282860  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 08:45:06.313986  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 08:45:06.344383  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 08:45:06.376611  535088 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 08:45:06.399751  535088 ssh_runner.go:195] Run: openssl version
	I1101 08:45:06.406933  535088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 08:45:06.421716  535088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 08:45:06.427410  535088 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 08:45:06.427478  535088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 08:45:06.435363  535088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 08:45:06.449854  535088 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 08:45:06.455299  535088 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 08:45:06.455368  535088 kubeadm.go:401] StartCluster: {Name:addons-994396 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-994396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 08:45:06.455464  535088 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:45:06.455528  535088 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:45:06.499318  535088 cri.go:89] found id: ""
	I1101 08:45:06.499395  535088 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 08:45:06.513696  535088 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 08:45:06.527370  535088 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 08:45:06.541099  535088 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 08:45:06.541122  535088 kubeadm.go:158] found existing configuration files:
	
	I1101 08:45:06.541170  535088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 08:45:06.553610  535088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 08:45:06.553677  535088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 08:45:06.567384  535088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 08:45:06.580377  535088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 08:45:06.580444  535088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 08:45:06.593440  535088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 08:45:06.605393  535088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 08:45:06.605460  535088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 08:45:06.618978  535088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 08:45:06.631411  535088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 08:45:06.631487  535088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 08:45:06.645452  535088 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1101 08:45:06.719122  535088 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 08:45:06.719190  535088 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 08:45:06.829004  535088 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 08:45:06.829160  535088 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 08:45:06.829291  535088 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 08:45:06.841691  535088 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 08:45:06.866137  535088 out.go:252]   - Generating certificates and keys ...
	I1101 08:45:06.866269  535088 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 08:45:06.866364  535088 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 08:45:07.164883  535088 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 08:45:07.767615  535088 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 08:45:08.072088  535088 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 08:45:08.514870  535088 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 08:45:08.646331  535088 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 08:45:08.646504  535088 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-994396 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	I1101 08:45:08.781122  535088 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 08:45:08.781335  535088 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-994396 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	I1101 08:45:08.899420  535088 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 08:45:09.007181  535088 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 08:45:09.224150  535088 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 08:45:09.224224  535088 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 08:45:09.511033  535088 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 08:45:09.752693  535088 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 08:45:09.819463  535088 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 08:45:10.005082  535088 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 08:45:10.463552  535088 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 08:45:10.464025  535088 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 08:45:10.466454  535088 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 08:45:10.471575  535088 out.go:252]   - Booting up control plane ...
	I1101 08:45:10.471714  535088 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 08:45:10.471809  535088 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 08:45:10.471913  535088 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 08:45:10.490781  535088 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 08:45:10.491002  535088 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 08:45:10.498306  535088 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 08:45:10.498812  535088 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 08:45:10.498893  535088 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 08:45:10.686796  535088 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 08:45:10.686991  535088 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 08:45:11.697343  535088 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.005207328s
	I1101 08:45:11.699752  535088 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 08:45:11.699949  535088 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.195:8443/livez
	I1101 08:45:11.700150  535088 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 08:45:11.704134  535088 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 08:45:13.981077  535088 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.280860487s
	I1101 08:45:15.371368  535088 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.67283221s
	I1101 08:45:17.198417  535088 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501722237s
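
The [control-plane-check] lines above poll the kube-apiserver /livez endpoint (and the controller-manager and scheduler health ports) until each reports healthy, with a 4m0s budget. A rough Go sketch of that kind of polling loop, using the endpoint and budget from the log; kubeadm's real check trusts the cluster CA rather than skipping TLS verification as this sketch does:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver cert is signed by the cluster CA, which this
			// sketch does not load, so certificate verification is skipped.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(4 * time.Minute) // "This can take up to 4m0s"
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.195:8443/livez")
		if err == nil {
			ok := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if ok {
				fmt.Println("kube-apiserver is healthy")
				return
			}
		}
		time.Sleep(time.Second)
	}
	fmt.Println("kube-apiserver did not become healthy in time")
}
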
	I1101 08:45:17.211581  535088 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 08:45:17.231075  535088 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 08:45:17.253882  535088 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 08:45:17.254137  535088 kubeadm.go:319] [mark-control-plane] Marking the node addons-994396 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 08:45:17.268868  535088 kubeadm.go:319] [bootstrap-token] Using token: f9fr0l.j77e5jevkskl9xb5
	I1101 08:45:17.270121  535088 out.go:252]   - Configuring RBAC rules ...
	I1101 08:45:17.270326  535088 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 08:45:17.277792  535088 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 08:45:17.293695  535088 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 08:45:17.296955  535088 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 08:45:17.300284  535088 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 08:45:17.303890  535088 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 08:45:17.605222  535088 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 08:45:18.065761  535088 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 08:45:18.604676  535088 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 08:45:18.605674  535088 kubeadm.go:319] 
	I1101 08:45:18.605802  535088 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 08:45:18.605830  535088 kubeadm.go:319] 
	I1101 08:45:18.605992  535088 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 08:45:18.606023  535088 kubeadm.go:319] 
	I1101 08:45:18.606068  535088 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 08:45:18.606156  535088 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 08:45:18.606234  535088 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 08:45:18.606243  535088 kubeadm.go:319] 
	I1101 08:45:18.606321  535088 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 08:45:18.606330  535088 kubeadm.go:319] 
	I1101 08:45:18.606402  535088 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 08:45:18.606415  535088 kubeadm.go:319] 
	I1101 08:45:18.606489  535088 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 08:45:18.606605  535088 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 08:45:18.606702  535088 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 08:45:18.606712  535088 kubeadm.go:319] 
	I1101 08:45:18.606815  535088 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 08:45:18.606947  535088 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 08:45:18.606965  535088 kubeadm.go:319] 
	I1101 08:45:18.607067  535088 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token f9fr0l.j77e5jevkskl9xb5 \
	I1101 08:45:18.607196  535088 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:56aa18b20985495d814b65ba7a2f910118620c74c98b944601f44598a9c0be1d \
	I1101 08:45:18.607233  535088 kubeadm.go:319] 	--control-plane 
	I1101 08:45:18.607244  535088 kubeadm.go:319] 
	I1101 08:45:18.607366  535088 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 08:45:18.607389  535088 kubeadm.go:319] 
	I1101 08:45:18.607497  535088 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token f9fr0l.j77e5jevkskl9xb5 \
	I1101 08:45:18.607642  535088 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:56aa18b20985495d814b65ba7a2f910118620c74c98b944601f44598a9c0be1d 
	I1101 08:45:18.609590  535088 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 08:45:18.609615  535088 cni.go:84] Creating CNI manager for ""
	I1101 08:45:18.609625  535088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 08:45:18.611467  535088 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 08:45:18.612559  535088 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 08:45:18.629659  535088 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
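
The bridge CNI step above writes a small conflist (496 bytes here) into /etc/cni/net.d. The exact file contents are not shown in the log; a generic bridge-plus-portmap conflist for the 10.244.0.0/16 pod CIDR this cluster uses would look roughly like the following, which is a representative example rather than the file minikube actually ships:

{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
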
	I1101 08:45:18.653188  535088 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 08:45:18.653266  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:18.653283  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-994396 minikube.k8s.io/updated_at=2025_11_01T08_45_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7 minikube.k8s.io/name=addons-994396 minikube.k8s.io/primary=true
	I1101 08:45:18.823964  535088 ops.go:34] apiserver oom_adj: -16
	I1101 08:45:18.824003  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:19.324429  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:19.824169  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:20.324357  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:20.825065  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:21.324643  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:21.824929  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:22.325055  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:22.824179  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:23.324346  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:23.422037  535088 kubeadm.go:1114] duration metric: took 4.768840437s to wait for elevateKubeSystemPrivileges
	I1101 08:45:23.422092  535088 kubeadm.go:403] duration metric: took 16.966730014s to StartCluster
	I1101 08:45:23.422117  535088 settings.go:142] acquiring lock: {Name:mke0bea80b55c21af3a3a0f83862cfe6da014dd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:23.422289  535088 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-530629/kubeconfig
	I1101 08:45:23.422848  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/kubeconfig: {Name:mk1f1e6312f33030082fd627c6f74ca7eee16587 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:23.423145  535088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 08:45:23.423170  535088 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 08:45:23.423239  535088 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1101 08:45:23.423378  535088 addons.go:70] Setting yakd=true in profile "addons-994396"
	I1101 08:45:23.423402  535088 addons.go:239] Setting addon yakd=true in "addons-994396"
	I1101 08:45:23.423420  535088 addons.go:70] Setting inspektor-gadget=true in profile "addons-994396"
	I1101 08:45:23.423440  535088 config.go:182] Loaded profile config "addons-994396": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:45:23.423457  535088 addons.go:239] Setting addon inspektor-gadget=true in "addons-994396"
	I1101 08:45:23.423459  535088 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-994396"
	I1101 08:45:23.423473  535088 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-994396"
	I1101 08:45:23.423435  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.423491  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.423507  535088 addons.go:70] Setting registry=true in profile "addons-994396"
	I1101 08:45:23.423518  535088 addons.go:239] Setting addon registry=true in "addons-994396"
	I1101 08:45:23.423522  535088 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-994396"
	I1101 08:45:23.423539  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.423555  535088 addons.go:70] Setting cloud-spanner=true in profile "addons-994396"
	I1101 08:45:23.423568  535088 addons.go:239] Setting addon cloud-spanner=true in "addons-994396"
	I1101 08:45:23.423606  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.423731  535088 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-994396"
	I1101 08:45:23.423760  535088 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-994396"
	I1101 08:45:23.424125  535088 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-994396"
	I1101 08:45:23.424214  535088 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-994396"
	I1101 08:45:23.424248  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.423443  535088 addons.go:70] Setting metrics-server=true in profile "addons-994396"
	I1101 08:45:23.424283  535088 addons.go:239] Setting addon metrics-server=true in "addons-994396"
	I1101 08:45:23.424313  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.423545  535088 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-994396"
	I1101 08:45:23.424411  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.424496  535088 addons.go:70] Setting ingress=true in profile "addons-994396"
	I1101 08:45:23.423498  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.424512  535088 addons.go:239] Setting addon ingress=true in "addons-994396"
	I1101 08:45:23.424544  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.425045  535088 addons.go:70] Setting registry-creds=true in profile "addons-994396"
	I1101 08:45:23.425074  535088 addons.go:239] Setting addon registry-creds=true in "addons-994396"
	I1101 08:45:23.425105  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.425174  535088 addons.go:70] Setting volcano=true in profile "addons-994396"
	I1101 08:45:23.425210  535088 addons.go:239] Setting addon volcano=true in "addons-994396"
	I1101 08:45:23.425245  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.423474  535088 addons.go:70] Setting default-storageclass=true in profile "addons-994396"
	I1101 08:45:23.425528  535088 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-994396"
	I1101 08:45:23.425555  535088 addons.go:70] Setting gcp-auth=true in profile "addons-994396"
	I1101 08:45:23.425587  535088 addons.go:70] Setting volumesnapshots=true in profile "addons-994396"
	I1101 08:45:23.425594  535088 mustload.go:66] Loading cluster: addons-994396
	I1101 08:45:23.425605  535088 addons.go:239] Setting addon volumesnapshots=true in "addons-994396"
	I1101 08:45:23.425629  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.425759  535088 config.go:182] Loaded profile config "addons-994396": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:45:23.426001  535088 addons.go:70] Setting storage-provisioner=true in profile "addons-994396"
	I1101 08:45:23.426034  535088 addons.go:239] Setting addon storage-provisioner=true in "addons-994396"
	I1101 08:45:23.426060  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.426263  535088 addons.go:70] Setting ingress-dns=true in profile "addons-994396"
	I1101 08:45:23.426312  535088 addons.go:239] Setting addon ingress-dns=true in "addons-994396"
	I1101 08:45:23.426349  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.428071  535088 out.go:179] * Verifying Kubernetes components...
	I1101 08:45:23.430376  535088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 08:45:23.432110  535088 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1101 08:45:23.432211  535088 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1101 08:45:23.432239  535088 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1101 08:45:23.432548  535088 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-994396"
	I1101 08:45:23.433347  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.433599  535088 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1101 08:45:23.433622  535088 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1101 08:45:23.434372  535088 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1101 08:45:23.434372  535088 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1101 08:45:23.434372  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1101 08:45:23.434399  535088 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	W1101 08:45:23.434936  535088 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1101 08:45:23.434947  535088 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1101 08:45:23.434397  535088 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1101 08:45:23.435739  535088 addons.go:239] Setting addon default-storageclass=true in "addons-994396"
	I1101 08:45:23.435133  535088 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 08:45:23.435780  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.435145  535088 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1101 08:45:23.435145  535088 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1101 08:45:23.435569  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.436246  535088 out.go:179]   - Using image docker.io/registry:3.0.0
	I1101 08:45:23.436291  535088 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 08:45:23.437459  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1101 08:45:23.436270  535088 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1101 08:45:23.437541  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1101 08:45:23.437032  535088 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 08:45:23.437636  535088 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 08:45:23.437844  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1101 08:45:23.437918  535088 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 08:45:23.437851  535088 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1101 08:45:23.437941  535088 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 08:45:23.438856  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1101 08:45:23.437976  535088 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 08:45:23.438988  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1101 08:45:23.439032  535088 out.go:179]   - Using image docker.io/busybox:stable
	I1101 08:45:23.439073  535088 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1101 08:45:23.439539  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1101 08:45:23.439090  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1101 08:45:23.439094  535088 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1101 08:45:23.439317  535088 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 08:45:23.439929  535088 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 08:45:23.439932  535088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1101 08:45:23.439957  535088 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1101 08:45:23.439990  535088 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 08:45:23.440001  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 08:45:23.440144  535088 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 08:45:23.440159  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1101 08:45:23.442297  535088 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 08:45:23.442308  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1101 08:45:23.442298  535088 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1101 08:45:23.443272  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.443791  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.443933  535088 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 08:45:23.443957  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1101 08:45:23.444059  535088 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 08:45:23.444083  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1101 08:45:23.444856  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.444941  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.445160  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1101 08:45:23.445705  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.446038  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.446083  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.446929  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.448105  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1101 08:45:23.448713  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.449090  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.450028  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.450296  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.450327  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.450341  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.450369  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.450600  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1101 08:45:23.451017  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.451085  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.451162  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.451241  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.451823  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.451855  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.452155  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.452274  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.452437  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.452519  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.452542  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.452550  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.452567  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.452769  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.453008  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.453181  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.453204  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.453341  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1101 08:45:23.453485  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.453526  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.453547  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.453582  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.453698  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.453748  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.453776  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.453961  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.454247  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.454637  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.454592  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.454668  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.454765  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.454810  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.454640  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.454828  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.454953  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.455189  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.455476  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.455511  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.455565  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.455603  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.455714  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.455949  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1101 08:45:23.456005  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.457369  535088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1101 08:45:23.457390  535088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1101 08:45:23.460387  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.460852  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.460874  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.461072  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	W1101 08:45:23.763758  535088 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57416->192.168.39.195:22: read: connection reset by peer
	I1101 08:45:23.763807  535088 retry.go:31] will retry after 294.020846ms: ssh: handshake failed: read tcp 192.168.39.1:57416->192.168.39.195:22: read: connection reset by peer
	W1101 08:45:23.763891  535088 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57426->192.168.39.195:22: read: connection reset by peer
	I1101 08:45:23.763941  535088 retry.go:31] will retry after 247.932093ms: ssh: handshake failed: read tcp 192.168.39.1:57426->192.168.39.195:22: read: connection reset by peer
	I1101 08:45:23.987612  535088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 08:45:23.987618  535088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
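	(For context: the bash pipeline above rewrites the CoreDNS ConfigMap so the Corefile gains a "log" directive and a hosts entry for host.minikube.internal. Reconstructed purely from the sed expressions in that command, and not captured from the cluster, the resulting Corefile fragment would look roughly like:
	
	    log
	    errors
	    ...
	    hosts {
	       192.168.39.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf ...
	)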
	I1101 08:45:24.391549  535088 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1101 08:45:24.391592  535088 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1101 08:45:24.396118  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 08:45:24.428988  535088 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1101 08:45:24.429026  535088 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1101 08:45:24.539937  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 08:45:24.542018  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 08:45:24.551067  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1101 08:45:24.578439  535088 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1101 08:45:24.578476  535088 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1101 08:45:24.590870  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 08:45:24.593597  535088 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 08:45:24.593630  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1101 08:45:24.648891  535088 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:24.648945  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1101 08:45:24.654530  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 08:45:24.691639  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 08:45:24.775174  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 08:45:24.894476  535088 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1101 08:45:24.894518  535088 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1101 08:45:25.110719  535088 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1101 08:45:25.110755  535088 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1101 08:45:25.248567  535088 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 08:45:25.248606  535088 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 08:45:25.251834  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 08:45:25.279634  535088 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1101 08:45:25.279661  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1101 08:45:25.282613  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:25.356642  535088 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1101 08:45:25.356672  535088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1101 08:45:25.596573  535088 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1101 08:45:25.596609  535088 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1101 08:45:25.610846  535088 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1101 08:45:25.610885  535088 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1101 08:45:25.674735  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1101 08:45:25.705462  535088 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 08:45:25.705495  535088 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 08:45:25.746878  535088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1101 08:45:25.746929  535088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1101 08:45:25.925617  535088 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1101 08:45:25.925645  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1101 08:45:25.996036  535088 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1101 08:45:25.996070  535088 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1101 08:45:26.051328  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 08:45:26.240447  535088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1101 08:45:26.240483  535088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1101 08:45:26.408185  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1101 08:45:26.436460  535088 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 08:45:26.436501  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1101 08:45:26.557448  535088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1101 08:45:26.557481  535088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1101 08:45:26.856571  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 08:45:27.059648  535088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1101 08:45:27.059683  535088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1101 08:45:27.286113  535088 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.298454996s)
	I1101 08:45:27.286197  535088 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.298476587s)
	I1101 08:45:27.286240  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.890088886s)
	I1101 08:45:27.286229  535088 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1101 08:45:27.286918  535088 node_ready.go:35] waiting up to 6m0s for node "addons-994396" to be "Ready" ...
	I1101 08:45:27.312278  535088 node_ready.go:49] node "addons-994396" is "Ready"
	I1101 08:45:27.312325  535088 node_ready.go:38] duration metric: took 25.37676ms for node "addons-994396" to be "Ready" ...
	I1101 08:45:27.312346  535088 api_server.go:52] waiting for apiserver process to appear ...
	I1101 08:45:27.312422  535088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 08:45:27.686576  535088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1101 08:45:27.686612  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1101 08:45:27.792267  535088 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-994396" context rescaled to 1 replicas
	I1101 08:45:28.140990  535088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1101 08:45:28.141032  535088 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1101 08:45:28.704311  535088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1101 08:45:28.704352  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1101 08:45:29.292401  535088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1101 08:45:29.292429  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1101 08:45:29.854708  535088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 08:45:29.854740  535088 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1101 08:45:30.288568  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 08:45:30.575091  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.033025614s)
	I1101 08:45:30.862016  535088 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1101 08:45:30.865323  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:30.865769  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:30.865797  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:30.866047  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:31.632521  535088 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1101 08:45:31.806924  535088 addons.go:239] Setting addon gcp-auth=true in "addons-994396"
	I1101 08:45:31.807015  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:31.809359  535088 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1101 08:45:31.813090  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:31.814762  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:31.814801  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:31.814989  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:33.008057  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.456928918s)
	I1101 08:45:33.008164  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.417239871s)
	I1101 08:45:33.008205  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.35364594s)
	I1101 08:45:33.008240  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.316568456s)
	I1101 08:45:33.008302  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (8.233079465s)
	I1101 08:45:33.008386  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.756527935s)
	I1101 08:45:33.008524  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.725858558s)
	I1101 08:45:33.008553  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.333786806s)
	W1101 08:45:33.008563  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:33.008566  535088 addons.go:480] Verifying addon registry=true in "addons-994396"
	I1101 08:45:33.008586  535088 retry.go:31] will retry after 241.480923ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:33.008638  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.957281467s)
	I1101 08:45:33.008733  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.600492861s)
	I1101 08:45:33.008738  535088 addons.go:480] Verifying addon metrics-server=true in "addons-994396"
	I1101 08:45:33.010227  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.470250108s)
	I1101 08:45:33.010253  535088 addons.go:480] Verifying addon ingress=true in "addons-994396"
	I1101 08:45:33.011210  535088 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-994396 service yakd-dashboard -n yakd-dashboard
	
	I1101 08:45:33.011218  535088 out.go:179] * Verifying registry addon...
	I1101 08:45:33.012250  535088 out.go:179] * Verifying ingress addon...
	I1101 08:45:33.014024  535088 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1101 08:45:33.015512  535088 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1101 08:45:33.051723  535088 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 08:45:33.051749  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:33.051812  535088 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1101 08:45:33.051833  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
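	(The two kapi.go waiters above poll pods by label selector until they report Ready. A roughly equivalent manual check, shown only as an illustration and not part of the test run, would be:
	
	    kubectl --context addons-994396 get pods -n kube-system -l kubernetes.io/minikube-addons=registry
	    kubectl --context addons-994396 get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx
	)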
	W1101 08:45:33.111540  535088 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1101 08:45:33.250325  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:33.619402  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:33.619673  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:33.847569  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.990948052s)
	I1101 08:45:33.847595  535088 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (6.535150405s)
	I1101 08:45:33.847621  535088 api_server.go:72] duration metric: took 10.424417181s to wait for apiserver process to appear ...
	I1101 08:45:33.847629  535088 api_server.go:88] waiting for apiserver healthz status ...
	W1101 08:45:33.847626  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1101 08:45:33.847652  535088 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I1101 08:45:33.847651  535088 retry.go:31] will retry after 218.125549ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1101 08:45:33.908865  535088 api_server.go:279] https://192.168.39.195:8443/healthz returned 200:
	ok
	I1101 08:45:33.910593  535088 api_server.go:141] control plane version: v1.34.1
	I1101 08:45:33.910629  535088 api_server.go:131] duration metric: took 62.993472ms to wait for apiserver health ...
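	(The healthz probe above issues a GET against the apiserver endpoint and treats an HTTP 200 with body "ok" as healthy. An equivalent ad-hoc check from the host, assuming anonymous access to /healthz is enabled as in a default kubeadm setup, would be:
	
	    curl -k https://192.168.39.195:8443/healthz
	)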
	I1101 08:45:33.910638  535088 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 08:45:33.979264  535088 system_pods.go:59] 17 kube-system pods found
	I1101 08:45:33.979341  535088 system_pods.go:61] "amd-gpu-device-plugin-vssmp" [a3b8c16e-b583-47df-a5c2-97218d3ec5be] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 08:45:33.979358  535088 system_pods.go:61] "coredns-66bc5c9577-2rqh8" [b131b2b2-f9b9-4197-8bc7-4d1bc185c804] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:45:33.979373  535088 system_pods.go:61] "coredns-66bc5c9577-8b9dw" [7580a21e-bef2-4e34-84b5-b8f67e32b346] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:45:33.979381  535088 system_pods.go:61] "etcd-addons-994396" [9ed2483c-c69f-483c-a489-238983cc8e9e] Running
	I1101 08:45:33.979388  535088 system_pods.go:61] "kube-apiserver-addons-994396" [0d587a06-f48e-4068-bb17-3a28d8a8d340] Running
	I1101 08:45:33.979401  535088 system_pods.go:61] "kube-controller-manager-addons-994396" [e60002dc-411e-458d-b7ea-affbee71d5a0] Running
	I1101 08:45:33.979413  535088 system_pods.go:61] "kube-ingress-dns-minikube" [d947f942-2149-492a-9b4e-1f9c22405815] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:45:33.979421  535088 system_pods.go:61] "kube-proxy-fbmdq" [dc5dd6b4-2f38-4c9d-acd8-92f7984fd96a] Running
	I1101 08:45:33.979431  535088 system_pods.go:61] "kube-scheduler-addons-994396" [bfc13d51-5be5-4462-b4a9-5d4f37f75bc4] Running
	I1101 08:45:33.979438  535088 system_pods.go:61] "metrics-server-85b7d694d7-qpjgn" [ca6b12be-7c02-4334-aa28-6300877d8e89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:45:33.979452  535088 system_pods.go:61] "nvidia-device-plugin-daemonset-bn97p" [8cc13452-31c6-46b5-8efb-e8b44ec63c27] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 08:45:33.979468  535088 system_pods.go:61] "registry-6b586f9694-b4ph6" [f2c8e5be-bee4-4b31-a8dc-ee43d6a6430c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:45:33.979480  535088 system_pods.go:61] "registry-creds-764b6fb674-xstzf" [75cdadc5-e3ea-4aae-9002-6dca21e0f758] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:45:33.979501  535088 system_pods.go:61] "registry-proxy-bzs78" [151e456a-63e0-4527-8511-34c4444fef48] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 08:45:33.979512  535088 system_pods.go:61] "snapshot-controller-7d9fbc56b8-2pbx5" [e9e973a4-20dd-4785-a3d6-1557c012cc76] Pending
	I1101 08:45:33.979522  535088 system_pods.go:61] "snapshot-controller-7d9fbc56b8-jbkmr" [19dc2ae7-668b-4952-9c2d-6602eac4449e] Pending
	I1101 08:45:33.979531  535088 system_pods.go:61] "storage-provisioner" [a0182754-0c9c-458b-a340-20ec025cb56c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 08:45:33.979545  535088 system_pods.go:74] duration metric: took 68.899123ms to wait for pod list to return data ...
	I1101 08:45:33.979563  535088 default_sa.go:34] waiting for default service account to be created ...
	I1101 08:45:34.005592  535088 default_sa.go:45] found service account: "default"
	I1101 08:45:34.005620  535088 default_sa.go:55] duration metric: took 26.049347ms for default service account to be created ...
	I1101 08:45:34.005631  535088 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 08:45:34.029039  535088 system_pods.go:86] 17 kube-system pods found
	I1101 08:45:34.029088  535088 system_pods.go:89] "amd-gpu-device-plugin-vssmp" [a3b8c16e-b583-47df-a5c2-97218d3ec5be] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 08:45:34.029098  535088 system_pods.go:89] "coredns-66bc5c9577-2rqh8" [b131b2b2-f9b9-4197-8bc7-4d1bc185c804] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:45:34.029109  535088 system_pods.go:89] "coredns-66bc5c9577-8b9dw" [7580a21e-bef2-4e34-84b5-b8f67e32b346] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:45:34.029116  535088 system_pods.go:89] "etcd-addons-994396" [9ed2483c-c69f-483c-a489-238983cc8e9e] Running
	I1101 08:45:34.029123  535088 system_pods.go:89] "kube-apiserver-addons-994396" [0d587a06-f48e-4068-bb17-3a28d8a8d340] Running
	I1101 08:45:34.029128  535088 system_pods.go:89] "kube-controller-manager-addons-994396" [e60002dc-411e-458d-b7ea-affbee71d5a0] Running
	I1101 08:45:34.029139  535088 system_pods.go:89] "kube-ingress-dns-minikube" [d947f942-2149-492a-9b4e-1f9c22405815] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:45:34.029144  535088 system_pods.go:89] "kube-proxy-fbmdq" [dc5dd6b4-2f38-4c9d-acd8-92f7984fd96a] Running
	I1101 08:45:34.029150  535088 system_pods.go:89] "kube-scheduler-addons-994396" [bfc13d51-5be5-4462-b4a9-5d4f37f75bc4] Running
	I1101 08:45:34.029156  535088 system_pods.go:89] "metrics-server-85b7d694d7-qpjgn" [ca6b12be-7c02-4334-aa28-6300877d8e89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:45:34.029165  535088 system_pods.go:89] "nvidia-device-plugin-daemonset-bn97p" [8cc13452-31c6-46b5-8efb-e8b44ec63c27] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 08:45:34.029173  535088 system_pods.go:89] "registry-6b586f9694-b4ph6" [f2c8e5be-bee4-4b31-a8dc-ee43d6a6430c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:45:34.029184  535088 system_pods.go:89] "registry-creds-764b6fb674-xstzf" [75cdadc5-e3ea-4aae-9002-6dca21e0f758] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:45:34.029194  535088 system_pods.go:89] "registry-proxy-bzs78" [151e456a-63e0-4527-8511-34c4444fef48] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 08:45:34.029202  535088 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2pbx5" [e9e973a4-20dd-4785-a3d6-1557c012cc76] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:45:34.029211  535088 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jbkmr" [19dc2ae7-668b-4952-9c2d-6602eac4449e] Pending
	I1101 08:45:34.029232  535088 system_pods.go:89] "storage-provisioner" [a0182754-0c9c-458b-a340-20ec025cb56c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 08:45:34.029244  535088 system_pods.go:126] duration metric: took 23.605903ms to wait for k8s-apps to be running ...
	I1101 08:45:34.029259  535088 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 08:45:34.029328  535088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 08:45:34.057589  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:34.060041  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:34.066143  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 08:45:34.536703  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:34.540613  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:35.033279  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:35.057492  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:35.517382  535088 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.707985766s)
	I1101 08:45:35.519009  535088 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 08:45:35.519008  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.230381443s)
	I1101 08:45:35.519151  535088 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-994396"
	I1101 08:45:35.520249  535088 out.go:179] * Verifying csi-hostpath-driver addon...
	I1101 08:45:35.521386  535088 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1101 08:45:35.522322  535088 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1101 08:45:35.523075  535088 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1101 08:45:35.523091  535088 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1101 08:45:35.574185  535088 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 08:45:35.574221  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:35.574179  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:35.589220  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:35.670403  535088 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1101 08:45:35.670443  535088 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1101 08:45:35.926227  535088 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 08:45:35.926260  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1101 08:45:36.028744  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:36.029084  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:36.032411  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:36.103812  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 08:45:36.521069  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:36.523012  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:36.530349  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:37.024569  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:37.026839  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:37.029801  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:37.202891  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.952517264s)
	W1101 08:45:37.202946  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:37.202972  535088 retry.go:31] will retry after 301.106324ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
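	Note: every attempt at this apply fails with the same kubectl validation error, which typically means at least one YAML document inside /etc/kubernetes/addons/ig-crd.yaml is missing its top-level apiVersion and kind fields (an empty document between --- separators produces the same message). A minimal way to confirm this by hand on the node, reusing the paths shown in the log (the grep pattern and the --dry-run flag are illustrative and not part of the test), would be:

	    # list the top-level apiVersion/kind lines and document separators in the manifest
	    grep -nE '^(apiVersion:|kind:|---)' /etc/kubernetes/addons/ig-crd.yaml

	    # re-run the same apply client-side only; this should reproduce the validation error without touching the cluster
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --dry-run=client -f /etc/kubernetes/addons/ig-crd.yaml

	A well-formed CRD document in that file would begin with apiVersion: apiextensions.k8s.io/v1 and kind: CustomResourceDefinition; the retries below keep hitting the identical error, so the manifest content, rather than the cluster, is the likely culprit.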
	I1101 08:45:37.203012  535088 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.173650122s)
	I1101 08:45:37.203055  535088 system_svc.go:56] duration metric: took 3.173789622s WaitForService to wait for kubelet
	I1101 08:45:37.203071  535088 kubeadm.go:587] duration metric: took 13.779865062s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 08:45:37.203102  535088 node_conditions.go:102] verifying NodePressure condition ...
	I1101 08:45:37.208388  535088 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 08:45:37.208416  535088 node_conditions.go:123] node cpu capacity is 2
	I1101 08:45:37.208429  535088 node_conditions.go:105] duration metric: took 5.320357ms to run NodePressure ...
	I1101 08:45:37.208441  535088 start.go:242] waiting for startup goroutines ...
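	For context, the NodePressure check above simply reads the node's reported capacities and conditions; a rough manual equivalent, using the cluster context from this run (these commands are an illustrative approximation, since the test reads the same fields through client-go rather than the CLI), would be:

	    # print the capacity map the log summarizes (cpu: 2, ephemeral-storage: 17734596Ki, ...)
	    kubectl --context addons-994396 get node addons-994396 -o jsonpath='{.status.capacity}'

	    # show the node conditions (MemoryPressure, DiskPressure, PIDPressure, Ready)
	    kubectl --context addons-994396 describe node addons-994396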
	I1101 08:45:37.368099  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.301889566s)
	I1101 08:45:37.504488  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:37.521079  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:37.521246  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:37.528201  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:37.991386  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.887518439s)
	I1101 08:45:37.992795  535088 addons.go:480] Verifying addon gcp-auth=true in "addons-994396"
	I1101 08:45:37.995595  535088 out.go:179] * Verifying gcp-auth addon...
	I1101 08:45:37.997651  535088 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1101 08:45:38.013086  535088 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1101 08:45:38.013118  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
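	The gcp-auth verification that follows polls the single pod matched by this label selector until it leaves the Pending state; a hand-run approximation with the same selector and namespace (kubectl wait checks the Ready condition rather than the pod phase, so it only approximates the kapi.go loop) would be:

	    kubectl --context addons-994396 -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth
	    kubectl --context addons-994396 -n gcp-auth wait --for=condition=Ready pod -l kubernetes.io/minikube-addons=gcp-auth --timeout=5m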
	I1101 08:45:38.028095  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:38.030768  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:38.041146  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:38.502928  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:38.520170  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:38.521930  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:38.526766  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:39.004207  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:39.019028  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:39.024223  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:39.031869  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:39.206009  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.701470957s)
	W1101 08:45:39.206061  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:39.206085  535088 retry.go:31] will retry after 556.568559ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:39.503999  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:39.527340  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:39.537658  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:39.537658  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:39.763081  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:40.006287  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:40.021411  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:40.025825  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:40.028609  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:40.507622  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:40.523293  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:40.527164  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:40.530886  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:41.005619  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:41.021779  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:41.023058  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:41.028879  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:41.134842  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.371696885s)
	W1101 08:45:41.134889  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:41.134933  535088 retry.go:31] will retry after 634.404627ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:41.501998  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:41.519483  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:41.522699  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:41.527571  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:41.769910  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:42.004958  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:42.021144  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:42.021931  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:42.027195  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:42.501545  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:42.519865  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:42.522754  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:42.526903  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:42.775680  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.00572246s)
	W1101 08:45:42.775745  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:42.775781  535088 retry.go:31] will retry after 1.084498807s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:43.002944  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:43.020356  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:43.020475  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:43.134004  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:43.504736  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:43.519636  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:43.520489  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:43.525810  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:43.861263  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:44.001829  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:44.019292  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:44.021251  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:44.026202  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:44.503149  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:44.520624  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:44.520651  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:44.526211  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:45:44.623495  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:44.623540  535088 retry.go:31] will retry after 1.856024944s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:45.001600  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:45.020242  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:45.022140  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:45.026024  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:45.507084  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:45.523761  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:45.524237  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:45.529475  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:46.005033  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:46.108846  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:46.109151  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:46.109369  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:46.479732  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:46.503499  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:46.520286  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:46.526234  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:46.529155  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:47.001657  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:47.019094  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:47.023015  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:47.027997  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:47.507760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:47.519999  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:47.524925  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:47.528391  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:47.666049  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.186267383s)
	W1101 08:45:47.666140  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:47.666174  535088 retry.go:31] will retry after 4.139204607s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:48.003042  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:48.019125  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:48.027235  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:48.031596  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:48.722743  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:48.727291  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:48.727372  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:48.727610  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:49.004382  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:49.019147  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:49.021814  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:49.026878  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:49.504442  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:49.517916  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:49.520088  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:49.525828  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:50.001964  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:50.024108  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:50.024120  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:50.029503  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:50.504014  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:50.523676  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:50.527259  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:50.529569  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:51.002796  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:51.022756  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:51.022985  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:51.026836  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:51.501595  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:51.523272  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:51.526829  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:51.530749  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:51.806085  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:52.003559  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:52.019381  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:52.019451  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:52.027431  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:52.504756  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:52.522177  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:52.526818  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:52.531367  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:53.001310  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:53.018845  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:53.024989  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:53.029380  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:53.104383  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.298241592s)
	W1101 08:45:53.104437  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:53.104469  535088 retry.go:31] will retry after 2.354213604s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:53.504133  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:53.521260  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:53.521459  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:53.530531  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:54.465678  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:54.465798  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:54.466036  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:54.466159  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:54.562016  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:54.562014  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:54.562133  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:54.562454  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:55.001120  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:55.025479  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:55.025582  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:55.026324  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:55.460012  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:55.504349  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:55.519300  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:55.521013  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:55.527541  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:56.002846  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:56.025053  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:56.029411  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:56.032019  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:56.575604  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:56.575734  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:56.577952  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:56.577981  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:56.753301  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.293228646s)
	W1101 08:45:56.753349  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:56.753376  535088 retry.go:31] will retry after 4.355574242s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:57.006174  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:57.021087  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:57.023942  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:57.029154  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:57.505515  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:57.520197  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:57.523156  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:57.525955  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:58.001505  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:58.018201  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:58.022518  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:58.025296  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:58.505701  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:58.524023  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:58.526483  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:58.536508  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:59.001410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:59.017471  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:59.020442  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:59.025457  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:59.501507  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:59.519043  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:59.520094  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:59.525760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:00.001248  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:00.017563  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:00.020984  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:00.026549  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:00.501281  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:00.519844  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:00.521324  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:00.525700  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:01.001953  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:01.020105  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:01.020877  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:01.025885  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:01.110059  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:46:01.502129  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:01.519377  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:01.523178  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:01.526440  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:46:01.845885  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:46:01.845957  535088 retry.go:31] will retry after 7.871379914s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:46:02.001335  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:02.019157  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:02.021487  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:02.026236  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:02.502141  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:02.517119  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:02.519718  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:02.526453  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:03.002138  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:03.017025  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:03.019806  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:03.026770  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:03.502833  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:03.520032  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:03.520118  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:03.526559  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:04.064971  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:04.065055  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:04.068066  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:04.068526  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:04.502308  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:04.520197  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:04.521585  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:04.526046  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:05.003330  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:05.017484  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:05.019495  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:05.026496  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:05.501222  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:05.517839  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:05.520724  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:05.525994  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:06.001368  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:06.019614  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:06.020124  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:06.025568  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:06.500972  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:06.518736  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:06.520211  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:06.526135  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:07.002092  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:07.018836  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:07.020757  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:07.025238  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:07.503063  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:07.517984  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:07.519990  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:07.528565  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:08.002059  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:08.018162  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:08.020563  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:08.026357  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:08.501444  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:08.517337  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:08.519389  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:08.525929  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:09.002578  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:09.018521  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:09.020246  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:09.026866  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:09.501972  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:09.518157  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:09.519720  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:09.527087  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:09.718336  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:46:10.004096  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:10.021038  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:10.021333  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:10.027767  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:46:10.413712  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:46:10.413760  535088 retry.go:31] will retry after 19.114067213s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
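The inspektor-gadget apply fails because kubectl's client-side validation finds at least one document in ig-crd.yaml without apiVersion or kind set: the objects that do validate are still applied (hence the "unchanged"/"configured" lines in stdout), but the command exits with status 1, addons.go:462 records "apply failed, will retry", and retry.go:31 schedules the next attempt. One way to locate such a document is to decode the manifest one YAML document at a time and check for those two keys; the sketch below is illustrative only, is not part of minikube, and simply reuses the file path reported in the log.

    // checkmanifest.go - illustrative sketch; flags YAML documents missing apiVersion/kind.
    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/etc/kubernetes/addons/ig-crd.yaml") // path taken from the log
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for i := 1; ; i++ {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		if doc == nil {
    			continue // empty document between '---' separators
    		}
    		if doc["apiVersion"] == nil || doc["kind"] == nil {
    			fmt.Printf("document %d is missing apiVersion and/or kind\n", i)
    		}
    	}
    }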
	I1101 08:46:10.501358  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:10.517730  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:10.520404  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:10.526363  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:11.002849  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:11.019496  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:11.019995  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:11.026025  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:11.501655  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:11.518007  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:11.521219  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:11.525426  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:12.000873  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:12.017867  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:12.020240  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:12.026060  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:12.502263  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:12.518472  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:12.519451  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:12.526084  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:13.002272  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:13.017626  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:13.020404  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:13.025249  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:13.501457  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:13.518992  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:13.520857  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:13.526486  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:14.000572  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:14.019408  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:14.020492  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:14.025038  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:14.501826  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:14.518060  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:14.520198  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:14.526075  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:15.002744  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:15.018115  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:15.019636  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:15.025834  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:15.501625  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:15.518152  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:15.519669  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:15.525079  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:16.001990  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:16.021114  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:16.022918  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:16.025425  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:16.501061  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:16.519200  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:16.519212  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:16.525882  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:17.002326  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:17.017673  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:17.020197  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:17.026945  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:17.502364  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:17.518476  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:17.520804  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:17.526128  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:18.004541  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:18.017957  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:18.020439  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:18.028122  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:18.502479  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:18.519387  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:18.519499  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:18.525828  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:19.003038  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:19.019735  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:19.020844  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:19.027661  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:19.501803  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:19.519280  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:19.519835  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:19.526155  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:20.001793  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:20.018442  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:20.019878  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:20.025324  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:20.501246  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:20.520476  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:20.520774  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:20.525872  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:21.002010  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:21.018221  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:21.019989  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:21.025817  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:21.501814  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:21.518070  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:21.520290  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:21.526096  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:22.002018  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:22.019705  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:22.021053  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:22.026071  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:22.501728  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:22.519405  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:22.520617  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:22.525885  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:23.001744  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:23.019715  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:23.020644  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:23.025597  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:23.502175  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:23.519303  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:23.520222  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:23.526675  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:24.001582  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:24.018997  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:24.020524  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:24.025085  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:24.501770  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:24.519601  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:24.520468  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:24.525222  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:25.002719  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:25.018650  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:25.020825  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:25.026802  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:25.501690  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:25.517716  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:25.520832  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:25.525983  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:26.002212  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:26.017751  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:26.019488  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:26.025775  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:26.501873  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:26.519741  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:26.519825  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:26.526640  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:27.001148  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:27.019101  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:27.019815  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:27.025796  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:27.502066  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:27.518977  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:27.520625  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:27.527501  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:28.000982  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:28.018045  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:28.019539  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:28.026321  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:28.502967  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:28.517882  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:28.520453  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:28.525074  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:29.002093  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:29.019794  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:29.021920  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:29.025114  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:29.502294  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:29.517914  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:29.519213  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:29.526478  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:29.528534  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:46:30.001669  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:30.023801  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:30.027674  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:30.029691  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:46:30.252885  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:46:30.252962  535088 retry.go:31] will retry after 26.857733331s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
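Note that the delay between attempts grows (about 19s after the first failure, then about 27s here), so retry.go is applying an increasing, jittered backoff rather than a fixed interval. A minimal standard-library sketch of that retry-with-backoff pattern, with the base delay, growth factor, and attempt limit all assumed rather than taken from minikube's code, looks like this:

    // backoff.go - illustrative retry-with-backoff sketch; not minikube's retry.go.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff runs fn until it succeeds or attempts are exhausted,
    // sleeping an increasing, jittered delay between failures.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
    	delay := base
    	for i := 1; i <= attempts; i++ {
    		err := fn()
    		if err == nil {
    			return nil
    		}
    		if i == attempts {
    			return fmt.Errorf("giving up after %d attempts: %w", i, err)
    		}
    		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
    		fmt.Printf("apply failed, will retry after %s: %v\n", delay+jitter, err)
    		time.Sleep(delay + jitter)
    		delay *= 2 // assumed growth factor
    	}
    	return nil
    }

    func main() {
    	err := retryWithBackoff(3, 15*time.Second, func() error {
    		return errors.New("kubectl apply exited with status 1") // placeholder failure
    	})
    	fmt.Println(err)
    }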
	I1101 08:46:30.501958  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:30.518713  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:30.519451  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:30.526672  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:31.001425  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:31.019226  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:31.020064  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:31.026340  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:31.501882  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:31.518669  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:31.519450  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:31.526794  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:32.001295  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:32.018253  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:32.020474  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:32.026067  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:32.501521  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:32.520301  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:32.522051  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:32.526250  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:33.003215  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:33.018591  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:33.020188  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:33.026759  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:33.501809  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:33.518399  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:33.520442  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:33.526258  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:34.001781  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:34.019409  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:34.019682  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:34.026569  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:34.501910  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:34.518388  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:34.519877  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:34.526549  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:35.002205  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:35.018104  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:35.019931  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:35.026760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:35.501124  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:35.517626  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:35.519260  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:35.526635  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:36.001556  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:36.017651  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:36.020209  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:36.026600  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:36.501047  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:36.519095  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:36.520391  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:36.526515  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:37.001745  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:37.017677  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:37.019854  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:37.026083  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:37.504677  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:37.518518  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:37.519504  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:37.527753  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:38.001657  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:38.018846  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:38.020360  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:38.026665  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:38.501370  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:38.517442  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:38.519287  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:38.525990  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:39.001713  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:39.017774  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:39.019461  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:39.026372  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:39.500859  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:39.519797  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:39.520622  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:39.525917  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:40.001647  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:40.017652  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:40.019113  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:40.025818  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:40.501928  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:40.518504  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:40.520340  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:40.526037  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:41.002231  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:41.017533  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:41.019687  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:41.025641  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:41.501410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:41.518018  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:41.519326  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:41.527062  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:42.001935  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:42.018556  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:42.020009  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:42.025868  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:42.501909  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:42.519346  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:42.521539  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:42.525544  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:43.003422  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:43.018807  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:43.020340  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:43.026621  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:43.501787  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:43.517772  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:43.520385  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:43.526006  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:44.001729  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:44.018572  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:44.020505  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:44.027512  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:44.500861  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:44.517878  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:44.519941  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:44.525966  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:45.002733  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:45.022017  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:45.023425  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:45.027913  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:45.501505  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:45.518036  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:45.518304  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:45.526497  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:46.000839  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:46.018027  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:46.020574  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:46.025140  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:46.502126  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:46.517267  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:46.519576  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:46.525318  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:47.002664  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:47.019029  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:47.020440  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:47.026307  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:47.502751  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:47.518532  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:47.519877  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:47.525668  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:48.001531  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:48.017987  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:48.018860  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:48.025975  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:48.501993  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:48.519439  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:48.520680  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:48.525869  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:49.003110  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:49.020088  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:49.020281  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:49.026209  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:49.501972  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:49.518761  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:49.520450  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:49.526669  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:50.001945  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:50.019111  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:50.020657  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:50.025651  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:50.501137  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:50.519077  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:50.519422  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:50.526050  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:51.002264  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:51.017514  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:51.020444  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:51.026653  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:51.501218  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:51.517606  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:51.519711  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:51.525538  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:52.001505  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:52.017697  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:52.019403  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:52.027381  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:52.501030  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:52.519679  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:52.520880  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:52.525311  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:53.002074  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:53.017920  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:53.020689  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:53.025485  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:53.501565  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:53.518005  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:53.518985  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:53.525510  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:54.001882  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:54.018972  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:54.019868  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:54.025509  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:54.501041  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:54.519696  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:54.520156  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:54.526253  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:55.003167  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:55.017108  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:55.020966  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:55.025536  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:55.501588  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:55.519412  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:55.520387  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:55.526801  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:56.001703  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:56.018098  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:56.019805  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:56.025874  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:56.501547  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:56.518508  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:56.519409  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:56.527341  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:57.001269  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:57.017737  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:57.019765  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:57.026345  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:57.111554  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:46:57.502821  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:57.521781  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:57.523859  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:57.526058  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:46:57.837380  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 08:46:57.837579  535088 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
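	The two warnings above record a single failure: kubectl rejected /etc/kubernetes/addons/ig-crd.yaml because the manifest had no apiVersion or kind fields, so the inspektor-gadget addon callback exited with status 1 while the other addon pods remained Pending. A minimal, hypothetical way to confirm this from the host is sketched below; it assumes the same node and file paths as the surrounding log, and a well-formed CRD manifest would begin with explicit apiVersion and kind fields (for example apiVersion: apiextensions.k8s.io/v1, kind: CustomResourceDefinition):
	
	  # hypothetical check (not part of the test run): inspect the first lines of the generated manifest on the node
	  minikube ssh -- sudo head -n 5 /etc/kubernetes/addons/ig-crd.yaml
	  # kubectl's own suggestion from the error text, reapplying with validation disabled:
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --validate=false -f /etc/kubernetes/addons/ig-crd.yaml
	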
	I1101 08:46:58.002477  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:58.017866  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:58.019513  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:58.025873  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:58.501877  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:58.518871  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:58.519700  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:58.525438  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:59.004488  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:59.026436  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:59.031423  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:59.033704  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:59.508129  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:59.521490  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:59.521737  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:59.526781  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:00.003739  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:00.022791  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:00.022910  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:00.026491  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:00.501517  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:00.517703  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:00.518550  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:00.528527  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:01.010322  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:01.026679  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:01.030087  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:01.030397  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:01.502386  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:01.517530  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:01.522260  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:01.532240  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:02.002156  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:02.022137  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:02.023086  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:02.026049  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:02.504322  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:02.519252  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:02.523461  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:02.528764  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:03.004016  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:03.019471  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:03.021442  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:03.026419  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:03.504419  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:03.519469  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:03.520406  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:03.525550  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:04.002462  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:04.020193  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:04.021462  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:04.026107  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:04.501642  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:04.517490  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:04.519930  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:04.526445  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:05.005197  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:05.018536  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:05.023123  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:05.029475  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:05.502664  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:05.518118  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:05.520518  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:05.526091  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:06.002738  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:06.019575  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:06.022744  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:06.026515  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:06.502554  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:06.519943  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:06.521590  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:06.526208  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:07.004023  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:07.019789  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:07.020273  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:07.026416  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:07.504157  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:07.518612  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:07.520773  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:07.527827  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:08.007295  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:08.020757  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:08.024258  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:08.031878  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:08.505225  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:08.518839  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:08.521622  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:08.525366  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:09.003369  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:09.024660  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:09.024787  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:09.029399  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:09.502978  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:09.520074  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:09.520999  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:09.527832  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:10.002118  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:10.019490  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:10.019688  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:10.026021  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:10.502365  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:10.517980  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:10.519426  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:10.526456  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:11.000763  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:11.017778  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:11.019554  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:11.025361  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:11.502621  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:11.519369  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:11.520248  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:11.525881  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:12.001298  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:12.019652  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:12.020408  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:12.026077  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:12.506179  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:12.518698  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:12.520608  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:12.525646  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:13.004165  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:13.018567  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:13.021172  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:13.026558  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:13.502399  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:13.517614  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:13.520163  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:13.526224  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:14.002692  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:14.018788  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:14.020233  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:14.026247  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:14.502451  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:14.519291  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:14.520395  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:14.528734  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:15.001583  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:15.017574  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:15.019594  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:15.027073  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:15.502087  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:15.518165  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:15.518856  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:15.526691  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:16.002848  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:16.019225  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:16.020564  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:16.025778  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:16.501756  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:16.518991  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:16.520609  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:16.525245  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:17.001845  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:17.019346  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:17.019684  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:17.026396  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:17.502188  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:17.517746  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:17.520856  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:17.525856  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:18.001858  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:18.018536  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:18.021348  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:18.026925  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:18.502390  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:18.517522  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:18.520124  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:18.525853  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:19.001850  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:19.019071  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:19.020953  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:19.025941  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:19.502259  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:19.517542  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:19.520882  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:19.526825  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:20.001558  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:20.018927  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:20.020008  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:20.025511  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:20.501320  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:20.517732  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:20.519487  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:20.526814  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:21.001370  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:21.018101  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:21.019530  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:21.025941  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:21.501703  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:21.517836  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:21.519684  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:21.526074  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:22.001809  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:22.017626  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:22.019534  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:22.025673  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:22.501888  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:22.520695  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:22.521501  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:22.527625  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:23.001636  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:23.017676  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:23.019410  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:23.026546  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:23.502193  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:23.517565  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:23.519741  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:23.525318  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:24.001469  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:24.018681  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:24.021251  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:24.026297  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:24.500658  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:24.517656  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:24.520275  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:24.526953  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:25.002390  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:25.018753  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:25.021470  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:25.026724  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:25.503080  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:25.519469  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:25.522083  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:25.525703  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:26.001480  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:26.018730  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:26.019775  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:26.025922  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:26.501850  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:26.518460  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:26.520597  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:26.526270  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:27.002686  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:27.017503  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:27.019988  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:27.026061  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:27.501773  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:27.519208  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:27.519306  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:27.526944  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:28.001885  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:28.018098  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:28.020961  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:28.026254  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:28.500970  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:28.519603  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:28.521180  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:28.526295  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:29.003607  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:29.018630  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:29.021082  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:29.026312  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:29.501919  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:29.517754  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:29.519736  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:29.525891  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:30.002036  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:30.018828  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:30.020404  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:30.026209  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:30.502329  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:30.517607  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:30.520177  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:30.527152  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:31.003066  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:31.020280  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:31.020496  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:31.026046  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:31.503011  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:31.519101  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:31.520154  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:31.525819  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:32.001349  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:32.017760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:32.020383  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:32.026548  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:32.501020  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:32.519372  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:32.520621  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:32.525197  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:33.001939  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:33.017981  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:33.018721  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:33.025389  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:33.502684  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:33.519286  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:33.519798  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:33.526360  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:34.001915  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:34.018089  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:34.018866  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:34.025884  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:34.502109  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:34.518315  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:34.520992  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:34.525955  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:35.001980  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:35.020058  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:35.020195  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:35.026107  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:35.502513  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:35.519131  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:35.519364  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:35.526431  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:36.001532  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:36.017633  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:36.019879  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:36.025714  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:36.501267  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:36.517441  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:36.519775  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:36.526367  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:37.002311  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:37.017625  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:37.020233  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:37.025830  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:37.502486  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:37.518494  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:37.519337  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:37.526256  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:38.002200  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:38.017679  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:38.020437  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:38.025635  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:38.502121  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:38.518742  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:38.519609  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:38.525528  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:39.001668  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:39.017868  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:39.019195  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:39.027138  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:39.502726  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:39.518837  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:39.519527  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:39.525448  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:40.037966  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:40.038824  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:40.039617  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:40.039888  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:40.510995  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:40.611235  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:40.611494  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:40.612020  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:41.007852  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:41.104319  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:41.105167  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:41.106241  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:41.503207  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:41.519701  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:41.523717  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:41.528111  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:42.002832  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:42.019368  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:42.026027  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:42.028968  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:42.504592  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:42.518781  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:42.522913  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:42.527017  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:43.002059  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:43.021540  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:43.022732  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:43.027733  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:43.501969  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:43.523064  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:43.523122  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:43.526723  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:44.016033  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:44.048228  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:44.048288  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:44.049707  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:44.510334  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:44.517005  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:44.520734  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:44.527760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:45.002493  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:45.025067  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:45.025090  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:45.030831  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:45.503106  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:45.519233  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:45.522740  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:45.526357  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:46.003368  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:46.021702  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:46.023084  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:46.025372  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:46.507201  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:46.528398  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:46.528540  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:46.528597  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:47.005313  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:47.021521  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:47.023522  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:47.030205  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:47.508306  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:47.517975  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:47.523254  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:47.528801  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:48.004599  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:48.018025  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:48.024054  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:48.030295  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:48.504150  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:48.518048  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:48.519937  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:48.527633  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:49.003426  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:49.021317  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:49.104457  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:49.105285  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:49.502613  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:49.520941  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:49.521038  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:49.525762  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:50.002168  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:50.018353  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:50.019606  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:50.025332  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:50.501342  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:50.518265  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:50.520375  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:50.526058  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:51.001482  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:51.018509  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:51.018674  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:51.026149  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:51.502439  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:51.518320  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:51.519717  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:51.525114  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:52.001594  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:52.017697  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:52.019121  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:52.026265  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:52.501713  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:52.517565  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:52.519496  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:52.525722  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:53.001345  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:53.018104  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:53.020275  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:53.025637  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:53.503025  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:53.518670  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:53.520663  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:53.525659  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:54.001263  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:54.018846  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:54.019116  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:54.025335  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:54.502071  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:54.519000  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:54.519010  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:54.525456  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:55.001977  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:55.017957  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:55.021189  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:55.026699  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:55.502333  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:55.517379  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:55.519350  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:55.526773  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:56.001599  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:56.018008  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:56.020215  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:56.025828  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:56.501455  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:56.517521  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:56.519235  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:56.527201  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:57.001827  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:57.020037  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:57.020749  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:57.025827  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:57.503759  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:57.517849  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:57.520371  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:57.526800  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:58.002360  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:58.017843  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:58.020412  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:58.026527  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:58.501394  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:58.517523  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:58.520352  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:58.525725  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:59.002102  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:59.017074  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:59.020520  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:59.026683  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:59.502383  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:59.517821  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:59.520938  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:59.525444  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:00.004519  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:00.104585  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:00.104625  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:00.104775  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:00.501109  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:00.518462  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:00.519031  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:00.525932  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:01.001882  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:01.018255  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:01.019640  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:01.025291  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:01.503231  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:01.518634  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:01.520274  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:01.526356  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:02.002389  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:02.018529  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:02.019411  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:02.026657  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:02.501043  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:02.518076  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:02.519080  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:02.526504  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:03.001361  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:03.019762  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:03.022333  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:03.025239  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:03.501714  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:03.519163  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:03.521149  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:03.526410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:04.000747  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:04.019676  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:04.020330  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:04.026159  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:04.502467  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:04.518491  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:04.518845  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:04.525769  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:05.001664  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:05.019454  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:05.019620  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:05.027022  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:05.502850  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:05.518666  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:05.520316  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:05.526009  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:06.002470  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:06.017750  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:06.019816  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:06.025697  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:06.501760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:06.519481  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:06.519738  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:06.525711  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:07.001752  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:07.017749  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:07.019804  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:07.025660  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:07.501792  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:07.517577  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:07.519794  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:07.525244  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:08.002742  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:08.018517  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:08.020369  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:08.026630  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:08.501587  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:08.518305  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:08.519219  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:08.526380  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:09.000977  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:09.018805  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:09.019761  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:09.025690  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:09.501890  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:09.517987  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:09.520782  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:09.525601  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:10.001949  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:10.018921  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:10.020592  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:10.026413  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:10.501660  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:10.518677  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:10.518948  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:10.525564  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:11.001486  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:11.017692  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:11.019759  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:11.025724  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:11.503245  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:11.519474  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:11.520078  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:11.525649  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:12.002655  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:12.017994  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:12.020743  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:12.025544  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:12.500866  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:12.519004  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:12.520797  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:12.527102  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:13.001891  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:13.019380  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:13.020948  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:13.025584  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:13.502039  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:13.519170  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:13.520827  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:13.525891  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:14.002597  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:14.018456  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:14.019344  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:14.025889  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:14.501808  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:14.518199  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:14.520114  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:14.526515  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:15.000809  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:15.017935  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:15.019860  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:15.026010  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:15.502293  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:15.517549  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:15.520189  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:15.603271  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:16.001815  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:16.018392  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:16.020440  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:16.025577  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:16.501456  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:16.517675  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:16.519938  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:16.525413  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:17.000943  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:17.017838  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:17.021846  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:17.026719  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:17.502498  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:17.517532  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:17.518370  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:17.526307  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:18.002824  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:18.019355  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:18.019386  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:18.027193  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:18.501577  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:18.518262  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:18.520767  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:18.525078  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:19.002037  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:19.020156  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:19.021197  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:19.025423  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:19.501921  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:19.519607  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:19.520544  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:19.524793  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:20.001960  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:20.018434  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:20.020315  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:20.026179  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:20.503025  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:20.518911  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:20.520556  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:20.525269  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:21.002029  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:21.024168  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:21.026997  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:21.031803  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:21.502358  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:21.517786  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:21.518786  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:21.525830  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:22.001594  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:22.017338  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:22.018324  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:22.025889  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:22.503054  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:22.520388  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:22.521916  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:22.526202  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:23.002517  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:23.020216  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:23.021156  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:23.028984  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:23.500976  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:23.519154  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:23.519316  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:23.526809  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:24.002882  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:24.019205  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:24.020141  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:24.026965  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:24.501036  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:24.518337  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:24.519991  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:24.525486  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:25.001657  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:25.018947  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:25.019127  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:25.025725  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:25.501581  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:25.518560  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:25.520017  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:25.525518  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:26.001825  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:26.018331  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:26.020369  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:26.026403  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:26.501127  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:26.519632  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:26.520978  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:26.525884  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:27.002361  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:27.018164  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:27.020412  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:27.027021  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:27.502390  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:27.517925  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:27.520125  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:27.525535  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:28.002688  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:28.017322  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:28.019838  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:28.025328  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:28.501474  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:28.517324  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:28.519128  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:28.525804  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:29.001640  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:29.017615  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:29.019699  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:29.025407  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:29.501333  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:29.518228  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:29.520320  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:29.526401  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:30.001257  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:30.017769  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:30.019813  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:30.025681  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:30.501852  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:30.517912  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:30.519457  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:30.525502  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:31.001036  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:31.018891  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:31.019341  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:31.026847  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:31.501891  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:31.517945  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:31.519845  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:31.525477  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:32.002494  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:32.018364  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:32.019047  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:32.025949  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:32.501632  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:32.517753  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:32.519551  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:32.525075  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:33.002010  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:33.019109  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:33.021003  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:33.025940  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:33.503032  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:33.518866  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:33.520801  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:33.525566  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:34.002115  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:34.017835  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:34.020583  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:34.026191  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:34.502465  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:34.517620  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:34.520272  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:34.526608  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:35.000870  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:35.018932  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:35.019718  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:35.025748  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:35.502491  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:35.517523  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:35.519496  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:35.525784  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:36.001520  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:36.019495  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:36.020061  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:36.026348  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:36.501803  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:36.519550  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:36.519863  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:36.526033  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:37.001475  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:37.018365  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:37.019331  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:37.026308  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:37.502572  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:37.517421  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:37.520211  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:37.525925  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:38.001941  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:38.019309  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:38.020493  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:38.027497  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:38.501822  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:38.517786  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:38.520262  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:38.526454  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:39.003835  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:39.019771  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:39.020317  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:39.025953  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:39.501469  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:39.517769  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:39.519531  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:39.526394  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:40.001467  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:40.018767  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:40.018975  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:40.025574  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:40.501327  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:40.517147  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:40.519793  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:40.525870  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:41.001711  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:41.019756  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:41.022733  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:41.025432  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:41.501110  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:41.517577  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:41.520152  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:41.526331  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:42.001665  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:42.018212  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:42.020818  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:42.027301  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:42.502145  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:42.518137  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:42.520139  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:42.525932  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:43.002613  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:43.018231  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:43.019849  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:43.026083  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:43.501054  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:43.518385  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:43.519196  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:43.526209  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:44.002494  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:44.017824  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:44.020797  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:44.026068  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:44.501618  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:44.519136  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:44.519498  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:44.526198  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:45.001727  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:45.019695  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:45.020007  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:45.026210  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:45.502382  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:45.518209  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:45.520090  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:45.526008  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:46.002275  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:46.017575  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:46.020217  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:46.026182  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:46.501858  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:46.518887  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:46.520199  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:46.525849  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:47.001391  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:47.017528  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:47.019856  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:47.026978  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:47.502108  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:47.517185  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:47.519497  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:47.526193  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:48.002439  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:48.018567  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:48.019868  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:48.026369  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:48.502252  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:48.518245  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:48.519830  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:48.525789  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:49.002157  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:49.017975  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:49.020029  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:49.026100  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:49.504825  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:49.517735  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:49.522486  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:49.528548  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:50.005615  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:50.019305  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:50.021640  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:50.027410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:50.501443  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:50.519328  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:50.519829  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:50.526094  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:51.001398  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:51.019374  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:51.020621  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:51.024951  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:51.501419  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:51.517860  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:51.519006  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:51.525945  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:52.002467  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:52.017274  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:52.019058  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:52.025509  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:52.501980  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:52.517824  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:52.519466  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:52.524793  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:53.001604  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:53.018807  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:53.019698  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:53.025324  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:53.501302  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:53.517854  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:53.519844  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:53.526844  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:54.001945  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:54.017746  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:54.020114  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:54.025868  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:54.501860  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:54.519009  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:54.520308  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:54.525824  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:55.001176  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:55.017056  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:55.019336  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:55.026011  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:55.502015  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:55.518868  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:55.519785  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:55.525794  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:56.002253  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:56.017282  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:56.020639  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:56.026305  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:56.501860  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:56.518058  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:56.519766  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:56.525982  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:57.001770  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:57.018418  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:57.021050  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:57.026140  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:57.502619  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:57.517497  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:57.519971  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:57.526180  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:58.002367  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:58.018215  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:58.020881  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:58.025867  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:58.502163  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:58.518906  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:58.519560  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:58.525238  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:59.002160  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:59.018131  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:59.019720  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:59.026035  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:59.501498  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:59.517861  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:59.520038  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:59.525911  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:00.008043  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:00.108599  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:00.108605  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:00.108940  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:00.501986  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:00.519116  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:00.519363  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:00.526237  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:01.002941  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:01.018164  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:01.019968  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:01.026086  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:01.501165  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:01.518371  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:01.519716  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:01.526191  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:02.003221  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:02.017756  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:02.020569  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:02.025532  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:02.502303  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:02.517833  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:02.520043  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:02.526299  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:03.001963  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:03.019603  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:03.020175  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:03.026074  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:03.501418  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:03.518548  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:03.519326  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:03.526362  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:04.001337  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:04.017680  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:04.020642  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:04.025160  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:04.501481  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:04.519187  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:04.519354  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:04.526002  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:05.001164  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:05.017266  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:05.020018  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:05.025815  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:05.501835  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:05.518458  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:05.519449  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:05.526988  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:06.001942  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:06.017559  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:06.019230  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:06.027617  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:06.501568  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:06.518953  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:06.519722  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:06.525410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:07.000827  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:07.017696  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:07.019798  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:07.025714  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:07.501984  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:07.519229  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:07.520125  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:07.525931  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:08.002067  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:08.018520  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:08.020314  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:08.026702  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:08.501478  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:08.518992  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:08.519109  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:08.525577  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:09.001061  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:09.019049  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:09.019914  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:09.025870  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:09.501375  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:09.517502  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:09.520013  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:09.525860  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:10.002219  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:10.018451  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:10.019784  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:10.025779  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:10.503078  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:10.519196  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:10.519485  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:10.528833  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:11.001789  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:11.017702  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:11.019708  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:11.025298  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:11.501809  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:11.517966  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:11.520785  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:11.526958  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:12.002467  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:12.017726  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:12.019345  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:12.026841  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:12.501551  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:12.518027  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:12.520217  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:12.526558  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:13.001536  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:13.018736  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:13.020611  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:13.025440  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:13.501358  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:13.517837  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:13.519745  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:13.526510  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:14.002283  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:14.017864  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:14.019800  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:14.025916  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:14.502006  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:14.519062  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:14.519655  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:14.525994  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:15.005447  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:15.017234  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:15.019831  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:15.026557  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:15.501996  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:15.519856  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:15.520083  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:15.525230  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:16.002748  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:16.019355  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:16.019533  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:16.025957  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:16.502580  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:16.517837  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:16.519968  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:16.525850  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:17.001935  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:17.019152  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:17.019529  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:17.025144  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:17.503036  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:17.518401  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:17.520738  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:17.525739  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:18.001970  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:18.018590  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:18.019682  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:18.026543  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:18.505234  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:18.517615  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:18.520770  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:18.525690  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:19.001486  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:19.018177  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:19.019004  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:19.025710  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:19.502094  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:19.519521  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:19.520380  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:19.526127  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:20.002068  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:20.020224  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:20.021127  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:20.025520  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:20.501694  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:20.518963  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:20.520765  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:20.525058  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:21.007417  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:21.019690  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:21.024784  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:21.025732  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:21.504133  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:21.520851  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:21.521975  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:21.528716  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:22.002656  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:22.019037  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:22.020474  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:22.026247  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:22.501702  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:22.517925  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:22.521095  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:22.526859  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:23.002583  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:23.019101  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:23.020457  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:23.025456  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:23.502095  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:23.518464  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:23.522059  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:23.526260  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:24.003337  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:24.017841  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:24.021116  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:24.025850  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:24.501756  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:24.518762  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:24.520412  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:24.527410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:25.001848  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:25.018927  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:25.019525  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:25.025681  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:25.501555  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:25.518984  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:25.519924  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:25.526028  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:26.002318  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:26.018839  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:26.021112  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:26.025766  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:26.501254  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:26.518654  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:26.520701  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:26.525608  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:27.001830  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:27.017870  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:27.020014  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:27.026744  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:27.501677  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:27.519613  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:27.519874  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:27.526220  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:28.002947  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:28.019118  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:28.020560  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:28.025161  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:28.501842  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:28.518344  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:28.519678  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:28.525197  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:29.003014  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:29.018826  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:29.020409  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:29.026088  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:29.501916  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:29.518127  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:29.520850  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:29.525382  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:30.001229  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:30.017453  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:30.019095  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:30.026360  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:30.502510  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:30.517380  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:30.518702  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:30.525410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:31.001216  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:31.018086  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:31.020349  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:31.026668  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:31.502075  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:31.518995  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:31.519726  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:31.526262  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:32.011176  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:32.018083  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:32.022218  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:32.026390  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:32.501928  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:32.518961  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:32.519981  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:32.525961  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:33.002956  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:33.018416  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:33.020053  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:33.026871  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:33.503382  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:33.518628  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:33.520030  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:33.526081  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:34.004511  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:34.017733  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:34.019809  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:34.026157  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:34.502455  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:34.517764  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:34.519007  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:34.525748  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:35.002201  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:35.018354  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:35.020561  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:35.024986  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:35.501676  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:35.518080  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:35.520259  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:35.526231  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:36.002290  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:36.017246  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:36.019747  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:36.025424  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:36.502256  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:36.519181  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:36.519361  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:36.526313  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:37.001733  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:37.017924  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:37.019432  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:37.024916  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:37.501788  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:37.518994  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:37.520329  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:37.526158  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:38.002306  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:38.017816  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:38.020329  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:38.026122  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:38.502214  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:38.517689  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:38.519368  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:38.526566  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:39.001344  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:39.018348  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:39.021395  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:39.026118  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:39.502411  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:39.519218  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:39.519487  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:39.526004  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:40.002233  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:40.017415  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:40.020521  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:40.026057  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:40.502613  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:40.518860  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:40.520188  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:40.526090  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:41.002091  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:41.018506  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:41.019711  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:41.025910  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:41.502421  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:41.518400  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:41.521296  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:41.527921  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:42.003104  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:42.018378  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:42.020878  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:42.026161  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:42.502129  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:42.518686  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:42.520170  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:42.525923  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:43.004390  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:43.019175  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:43.022158  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:43.026467  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:43.504086  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:43.520367  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:43.520550  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:43.525380  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:44.002978  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:44.103477  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:44.103494  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:44.104185  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:44.502233  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:44.519809  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:44.519835  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:44.526423  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:45.000496  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:45.018444  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:45.019039  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:45.026510  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:45.502226  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:45.517482  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:45.520689  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:45.525876  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:46.001596  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:46.019690  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:46.021682  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:46.025805  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:46.501418  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:46.517889  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:46.520740  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:46.526273  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:47.001808  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:47.018410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:47.020658  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:47.025282  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:47.502482  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:47.517540  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:47.520502  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:47.525363  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:48.002384  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:48.018017  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:48.020110  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:48.026034  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:48.505672  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:48.520527  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:48.523748  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:48.529163  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:49.002861  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:49.017744  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:49.019716  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:49.025934  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:49.503141  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:49.517174  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:49.519166  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:49.526456  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:50.001342  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:50.017719  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:50.020032  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:50.026547  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:50.501789  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:50.519072  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:50.519782  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:50.525316  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:51.002325  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:51.017470  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:51.021020  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:51.026334  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:51.504006  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:51.518610  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:51.520767  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:51.525227  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:52.003295  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:52.018224  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:52.023940  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:52.028747  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:52.507809  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:52.522785  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:52.523541  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:52.527593  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:53.006856  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:53.021835  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:53.023449  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:53.029978  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:53.506277  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:53.523013  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:53.524326  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:53.531084  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:54.006985  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:54.018665  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:54.023247  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:54.026006  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:54.503056  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:54.519576  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:54.522065  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:54.526728  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:55.003139  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:55.020881  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:55.022886  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:55.028847  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:55.502733  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:55.521726  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:55.530711  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:55.532556  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:56.002638  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:56.021902  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:56.026061  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:56.027811  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:56.501943  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:56.518059  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:56.520358  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:56.527803  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:57.001212  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:57.022110  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:57.023066  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:57.027074  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:57.511753  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:57.522407  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:57.525249  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:57.528427  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:58.003779  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:58.019398  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:58.020765  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:58.025087  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:58.502271  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:58.519021  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:58.520012  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:58.526423  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:59.001770  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:59.028122  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:59.028948  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:59.029097  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:59.503552  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:59.519454  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:59.526099  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:59.528549  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:00.002150  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:00.018589  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:00.020579  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:50:00.026070  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:00.503019  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:00.518818  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:00.521298  535088 kapi.go:107] duration metric: took 4m27.50578325s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1101 08:50:00.526236  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:01.004597  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:01.017417  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:01.026007  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:01.503117  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:01.517929  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:01.526118  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:02.002140  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:02.017309  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:02.026874  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:02.502193  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:02.517206  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:02.526479  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:03.002066  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:03.018800  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:03.026667  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:03.501870  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:03.518027  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:03.526907  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:04.001943  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:04.018110  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:04.026258  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:04.503167  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:04.518066  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:04.526754  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:05.007821  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:05.017748  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:05.025450  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:05.501643  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:05.518495  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:05.525885  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:06.001380  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:06.017918  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:06.026946  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:06.502671  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:06.518784  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:06.526820  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:07.001754  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:07.019448  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:07.025975  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:07.502164  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:07.517678  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:07.526283  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:08.002858  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:08.019273  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:08.027420  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:08.501670  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:08.518047  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:08.526214  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:09.001840  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:09.018206  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:09.027687  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:09.501188  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:09.517532  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:09.526417  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:10.001069  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:10.018157  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:10.026212  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:10.502289  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:10.518055  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:10.526968  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:11.001635  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:11.017991  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:11.025970  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:11.506621  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:11.517412  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:11.526728  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:12.001701  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:12.018119  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:12.025969  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:12.502625  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:12.517475  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:12.526044  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:13.002186  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:13.018439  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:13.026091  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:13.500970  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:13.519505  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:13.525838  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:14.001977  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:14.018285  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:14.027576  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:14.501280  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:14.517529  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:14.526733  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:15.002377  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:15.018228  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:15.026340  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:15.502885  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:15.517651  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:15.527123  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:16.001756  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:16.018508  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:16.026298  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:16.503500  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:16.517929  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:16.526229  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:17.005499  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:17.105592  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:17.105644  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:17.501723  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:17.518760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:17.525930  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:18.009252  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:18.020798  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:18.026084  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:18.502008  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:18.518188  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:18.526054  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:19.001524  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:19.017526  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:19.026186  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:19.501501  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:19.517658  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:19.526525  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:20.001537  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:20.017379  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:20.027037  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:20.501883  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:20.518635  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:20.525619  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:21.001489  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:21.018302  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:21.026672  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:21.501586  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:21.517885  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:21.526477  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:22.000991  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:22.019224  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:22.027309  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:22.502253  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:22.518048  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:22.526007  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:23.002357  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:23.017858  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:23.027027  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:23.500869  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:23.517747  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:23.526047  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:24.002561  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:24.018227  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:24.027043  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:24.502430  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:24.518125  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:24.526108  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:25.002567  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:25.017833  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:25.025933  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:25.502126  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:25.517859  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:25.526354  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:26.000814  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:26.017887  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:26.026568  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:26.502946  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:26.518678  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:26.526480  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:27.001266  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:27.017216  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:27.026609  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:27.501961  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:27.519120  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:27.526911  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:28.002183  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:28.017072  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:28.026509  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:28.503467  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:28.517754  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:28.525800  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:29.001730  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:29.018081  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:29.026318  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:29.503000  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:29.518477  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:29.525663  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:30.001609  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:30.018380  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:30.027170  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:30.502338  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:30.518067  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:30.526337  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:31.001716  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:31.019042  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:31.026553  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:31.502516  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:31.517742  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:31.526076  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:32.003220  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:32.017115  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:32.026003  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:32.503084  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:32.520638  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:32.525815  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:33.002310  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:33.017855  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:33.026358  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:33.501484  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:33.518215  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:33.527345  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:34.001194  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:34.018531  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:34.026371  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:34.501860  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:34.518822  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:34.526665  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:35.000987  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:35.018881  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:35.026261  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:35.503065  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:35.519434  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:35.526091  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:36.002048  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:36.019887  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:36.026789  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:36.502205  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:36.518344  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:36.527132  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:37.001713  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:37.018302  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:37.027636  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:37.502137  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:37.518679  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:37.526770  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:38.002674  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:38.018502  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:38.025131  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:38.502841  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:38.518479  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:38.525394  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:39.003210  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:39.017479  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:39.026633  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:39.501409  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:39.517624  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:39.525765  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:40.001504  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:40.017795  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:40.026635  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:40.504580  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:40.518573  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:40.526384  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:41.000864  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:41.018489  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:41.025191  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:41.501782  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:41.518173  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:41.526463  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:42.000518  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:42.017873  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:42.027131  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:42.502017  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:42.518539  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:42.526000  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:43.002999  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:43.018398  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:43.027329  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:43.501816  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:43.518023  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:43.526878  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:44.002714  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:44.018483  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:44.026808  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:44.502514  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:44.517486  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:44.525494  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:45.000916  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:45.017682  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:45.026270  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:45.504311  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:45.517633  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:45.529587  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:46.005819  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:46.019419  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:46.028247  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:46.501836  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:46.603570  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:46.604017  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:47.002957  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:47.020722  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:47.103677  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:47.504417  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:47.529109  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:47.535255  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:48.027116  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:48.027384  535088 kapi.go:107] duration metric: took 5m10.029733807s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1101 08:50:48.029168  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:48.029460  535088 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-994396 cluster.
	I1101 08:50:48.030850  535088 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1101 08:50:48.032437  535088 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
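The three gcp-auth notices above describe the addon's opt-out mechanism: pods carrying the gcp-auth-skip-secret label are left alone by the credential-mounting step. As a hedged illustration only (it is not part of this log; the pod name, image, and the label value "true" are assumptions, since the log only says the key must be present), such a pod could be declared with the k8s.io/api Go types like this:

// Hypothetical sketch: a pod that opts out of the gcp-auth credential mount
// by carrying the gcp-auth-skip-secret label. Names and the label value are
// illustrative; the addon keys off the presence of the label key.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "no-gcp-creds", // hypothetical pod name
			Namespace: "default",
			Labels:    map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "busybox"}},
		},
	}
	fmt.Printf("pod %s labels: %v\n", pod.Name, pod.Labels)
}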
	I1101 08:50:48.524544  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:48.531119  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:49.018726  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:49.026282  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:49.518154  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:49.526614  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:50.018751  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:50.026031  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:50.518756  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:50.526155  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:51.018153  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:51.026760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:51.518286  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:51.526672  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:52.017371  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:52.027754  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:52.518074  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:52.526416  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:53.018974  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:53.026602  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:53.518144  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:53.526654  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:54.018625  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:54.026704  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:54.517492  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:54.525999  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:55.019257  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:55.027958  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:55.518075  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:55.526142  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:56.018092  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:56.025605  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:56.518596  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:56.525863  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:57.017562  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:57.025851  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:57.518709  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:57.526387  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:58.018590  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:58.025978  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:58.517643  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:58.525642  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:59.018664  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:59.025863  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:59.517006  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:59.527349  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:00.020576  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:00.029108  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:00.518333  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:00.527511  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:01.018504  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:01.027157  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:01.518405  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:01.526704  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:02.018500  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:02.026694  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:02.517768  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:02.526967  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:03.018243  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:03.026700  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:03.517836  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:03.526719  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:04.017510  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:04.025944  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:04.517662  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:04.526213  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:05.019140  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:05.026847  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:05.522889  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:05.526826  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:06.017784  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:06.026272  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:06.517992  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:06.527109  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:07.018586  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:07.026175  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:07.518974  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:07.526376  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:08.018995  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:08.026615  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:08.517947  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:08.526011  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:09.018511  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:09.025631  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:09.518218  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:09.526593  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:10.018682  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:10.026784  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:10.519095  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:10.527301  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:11.018993  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:11.025690  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:11.518483  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:11.526408  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:12.018208  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:12.027483  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:12.518108  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:12.528506  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:13.018723  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:13.026036  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:13.519547  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:13.525883  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:14.017886  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:14.026485  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:14.518428  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:14.526099  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:15.018816  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:15.028223  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:15.517235  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:15.526608  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:16.019497  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:16.026823  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:16.518374  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:16.526536  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:17.019643  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:17.026636  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:17.519221  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:17.527357  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:18.018310  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:18.027561  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:18.517385  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:18.526970  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:19.018802  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:19.026280  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:19.518858  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:19.527610  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:20.017707  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:20.028465  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:20.518519  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:20.526293  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:21.026625  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:21.030779  535088 kapi.go:107] duration metric: took 5m45.508455317s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1101 08:51:21.518734  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:22.018071  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:22.517851  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:23.022943  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:23.518235  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:24.018970  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:24.517611  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:25.019971  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:25.519134  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:26.018419  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:26.518767  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:27.018701  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:27.519283  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:28.019085  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:28.518032  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:29.019182  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:29.519048  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:30.018264  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:30.518858  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:31.018124  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:31.519120  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:32.021956  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:32.519959  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:33.014506  535088 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=registry" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1101 08:51:33.014547  535088 kapi.go:107] duration metric: took 6m0.000528296s to wait for kubernetes.io/minikube-addons=registry ...
	W1101 08:51:33.014668  535088 out.go:285] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I1101 08:51:33.016548  535088 out.go:179] * Enabled addons: amd-gpu-device-plugin, storage-provisioner, cloud-spanner, ingress-dns, nvidia-device-plugin, registry-creds, metrics-server, yakd, default-storageclass, volumesnapshots, ingress, gcp-auth, csi-hostpath-driver
	I1101 08:51:33.017988  535088 addons.go:515] duration metric: took 6m9.594756816s for enable addons: enabled=[amd-gpu-device-plugin storage-provisioner cloud-spanner ingress-dns nvidia-device-plugin registry-creds metrics-server yakd default-storageclass volumesnapshots ingress gcp-auth csi-hostpath-driver]
	I1101 08:51:33.018036  535088 start.go:247] waiting for cluster config update ...
	I1101 08:51:33.018057  535088 start.go:256] writing updated cluster config ...
	I1101 08:51:33.018363  535088 ssh_runner.go:195] Run: rm -f paused
	I1101 08:51:33.027702  535088 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 08:51:33.035072  535088 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2rqh8" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.039692  535088 pod_ready.go:94] pod "coredns-66bc5c9577-2rqh8" is "Ready"
	I1101 08:51:33.039727  535088 pod_ready.go:86] duration metric: took 4.614622ms for pod "coredns-66bc5c9577-2rqh8" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.041954  535088 pod_ready.go:83] waiting for pod "etcd-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.046075  535088 pod_ready.go:94] pod "etcd-addons-994396" is "Ready"
	I1101 08:51:33.046103  535088 pod_ready.go:86] duration metric: took 4.126087ms for pod "etcd-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.048189  535088 pod_ready.go:83] waiting for pod "kube-apiserver-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.052772  535088 pod_ready.go:94] pod "kube-apiserver-addons-994396" is "Ready"
	I1101 08:51:33.052802  535088 pod_ready.go:86] duration metric: took 4.587761ms for pod "kube-apiserver-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.055446  535088 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.433771  535088 pod_ready.go:94] pod "kube-controller-manager-addons-994396" is "Ready"
	I1101 08:51:33.433801  535088 pod_ready.go:86] duration metric: took 378.329685ms for pod "kube-controller-manager-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.634675  535088 pod_ready.go:83] waiting for pod "kube-proxy-fbmdq" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:34.034403  535088 pod_ready.go:94] pod "kube-proxy-fbmdq" is "Ready"
	I1101 08:51:34.034444  535088 pod_ready.go:86] duration metric: took 399.738812ms for pod "kube-proxy-fbmdq" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:34.233978  535088 pod_ready.go:83] waiting for pod "kube-scheduler-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:34.633095  535088 pod_ready.go:94] pod "kube-scheduler-addons-994396" is "Ready"
	I1101 08:51:34.633131  535088 pod_ready.go:86] duration metric: took 399.109096ms for pod "kube-scheduler-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:34.633149  535088 pod_ready.go:40] duration metric: took 1.605381934s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 08:51:34.682753  535088 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 08:51:34.684612  535088 out.go:179] * Done! kubectl is now configured to use "addons-994396" cluster and "default" namespace by default
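The long runs of "waiting for pod ..., current state: Pending" lines above come from kapi.go's wait loop: it polls the API server for pods matching an addon's label selector roughly every half second until they report Running, or until the wait's deadline expires (for the registry addon that deadline was 6m0s, which is why that wait ends with "context deadline exceeded"). A minimal sketch of that pattern with client-go follows; it is an assumption about the shape of the loop, not minikube's actual kapi.go code, and the kubeconfig path, namespace, and selector are illustrative.

// Sketch of a label-selector wait loop with a context deadline, assuming
// client-go. This is not minikube's implementation, only the pattern the
// log lines above suggest.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return fmt.Errorf("listing %q: %w", selector, err)
		}
		ready := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				ready = false
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			}
		}
		if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // e.g. "context deadline exceeded" after the timeout
		case <-ticker.C:
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	fmt.Println(waitForLabel(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry"))
}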
	
	
	==> CRI-O <==
	Nov 01 08:58:45 addons-994396 crio[817]: time="2025-11-01 08:58:45.409518396Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761987525409489228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:454585,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0ba711c5-b463-4f4b-94cb-69dec015b0b4 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 08:58:45 addons-994396 crio[817]: time="2025-11-01 08:58:45.410377754Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a7bff80-3075-4ba4-b077-16d029bd8aa8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:58:45 addons-994396 crio[817]: time="2025-11-01 08:58:45.410520975Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a7bff80-3075-4ba4-b077-16d029bd8aa8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:58:45 addons-994396 crio[817]: time="2025-11-01 08:58:45.411578267Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9aac7eb34690309e8dbd81343ee4a3afed4182f729bfb09119b2d0449fcb5163,PodSandboxId:cdbcecc3e9d43396748d11feb94389c468413b4e4db1f33c0ffbb67ba8cb8455,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761987117609973399,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f6cc746-15b0-4ddb-9f8b-fa3a7e7133ea,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c914a21ca5c30d325bf10151384a21f9bbcc7e25b2d34ca61bfaddd16505122,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1761987080383755595,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:437ef3bce50ac8a7ca0b9a31a96e010fea2dd24bba8a7a5f778f7bb5721a6a9d,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1761987048807726890,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f73cee1644b036ab76f839b96acf06de4009bbf807c978116290374a0b56065c,PodSandboxId:147663b03fe636d80386c5b9e498c5fb95c78d278121e7fb146f12c7e973609d,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761986999531197788,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-9cxnd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bf616938-c2ab-4f4c-92c8-9fa4ab2f6be9,},Annotations:map[string]
string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:862808e2ff30fdd764f8aaf3d5b1a5df067d9f837db07ff0372f86bd3b55cab5,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1761986992483188170,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4eac7bee2514139306d8419dc1c70f3cc677629e0546239a0322053b09eab44,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1761986961550289998,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89e19f39781eba8b57e656eb2450f2409f9b0faf0e3401335506a480d9066dc6,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1761986930173408810,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68bf99b640c16170eb3d1decd09fc1b538fbd6fde76792990703d14d18fd9728,PodSandboxId:c090988aa5e05ea1d7a0662eb99922460d3efcf1e9882123710f19fefe939704,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1761986868787532616,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf63ab79-b3fa-4917-a62b-a0758d1521b0,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39137378c3801cd49058632db343f950f188a84e2ff8cf681c71963efac4314f,PodSandboxId:6eaf5e212ad1c55657254e78247ce413b9c2d3e12e8e2cd69b6ccde788266623,Metadata:&ContainerMetadata{Name
:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1761986866382667222,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ee1d9b2-a99a-4003-9c65-77bd5e500b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80b7ac026d7558ab3c69afb722ff55dfe32d67be3e2bf197089b95da3dd31104,PodSandboxId:5ef1abbd77f24535b60585d2197c8a2259c59626ad0eb005b609003b505409e3,Metada
ta:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761986864620312300,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-jbkmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19dc2ae7-668b-4952-9c2d-6602eac4449e,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63011b6ec66fda56834e6c96c9772b128675e14e51fd5b96d9518a8ba29fa35,PodSandbox
Id:eeeab7772fb0e74c5be38da53381a6b90d0d5c26e9c8b732d2e1c6eb63671c65,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761986864516805400,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-2pbx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9e973a4-20dd-4785-a3d6-1557c012cc76,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6
e0352b147e8a8fe43c9d94072f3f3fcc98914a55a5718cfd5fe168dcdb81f49,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1761986863046366251,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fbb154c5ba009280da1a426866a4cdde2195fb0006640dafb05c0da182a4866,PodSandboxId:058d4f2c90db7e8eae07ad5783426e56e467541eacbcb171f0f9227663407e68,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761986861153109309,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dmt9r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7e49bedc-b72d-400d-bc07-62040e55ac39,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e6c68a57ee535127b46ca112ce1439ee32d248af87fb4452856eb3e38c8eb2e,PodSandboxId:a5dfb28615faf962ed89b8003d79c80e87152c2a8d669af58898bd3254030389,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761986861018576547,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6ptqs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9fe7abf8-c7e2-47ee-ac99-699c34674a22,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d2226436f827529da95ea6b9148e9aad9e62a07499351f701e80b097311d036,PodSandboxId:c449271f0824b108061a1ee1fc23fbe6d16056014d0cfc3011aa2c20b94a8e24,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1761986829754350164,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-bzs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151e456a-63e0-4527-8511-34c4444fef48,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.
ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda41d22ea7ff808cb20920820ccf87f95d0c484f75f853dec58fc5d4aaa461b,PodSandboxId:e07af8e7a3ecad5569ae3da9545b988c374ac9f7b90e8533dd68c1dd6ecef92c,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761986826170977750,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-z8nnd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c555360c-9a9f-4f
dd-aa67-f18c3d2a4eb2,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b56bd6c195bd711f17cd7b927c9fbb20679383d08b6e954d3297e9850be5235,PodSandboxId:6d69749ca9bc78fa01c49c7d0757f3d0eafa3536279a622367a1a3b427e5d70c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1761986821805194743,Labels:map[string]string{io.kubernetes.container.name: local-pa
th-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-9ghvj,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d3c3231a-40d9-42f1-bc78-e2d1a104327a,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4c1be283a7f47690c854c85c4dcacc3e8b42f6727081c4a8a73e3e44c1d194,PodSandboxId:9f7ac0dd48cc1abeb4273f865cde830d51e77c8bd29a6c76ccecaf35745e99f7,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:176198675844940796
3,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d947f942-2149-492a-9b4e-1f9c22405815,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad7748982f904bf89ac86d1b7be83acfe37cfe9d240db5a3d2236808b8910a3,PodSandboxId:ca1dd787f338ac0254f2b930b7369f671d7ee68d7732bee6af1cf786d745c456,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c887
2c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761986733821709901,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0182754-0c9c-458b-a340-20ec025cb56c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bb5f4d4e768dfe5c0cf6bc80363bf72a32d74ddba50c19fc7e3e82b2268e1d3,PodSandboxId:fec37181f6706eb4994bc850d0e6623521190c923720024b4407780ba5c3168a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761986732059653348,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-vssmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b8c16e-b583-47df-a5c2-97218d3ec5be,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ff7b8e8784408623315cf07e8942d13f74e52cb65ad09e2d25796114020c1,PodSandboxId:d62d15d11c4955eb24e7866e8b7732b6d4471d399c0e33cef74d06eb40917eec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e
0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761986725130503569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2rqh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131b2b2-f9b9-4197-8bc7-4d1bc185c804,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0a2f86b38f42fab057b3fea7994c150
73ec1d05f3db97341f0fed0ad342cf9,PodSandboxId:e1fb2fcb1123b9a18ac17a1d8481c82478eed03828d094aab60d26b7c2f58bbd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761986724242985390,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fbmdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5dd6b4-2f38-4c9d-acd8-92f7984fd96a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80489befa62b8185c103a7d016a78a5924e4c5187536cb66142d1c5f8cc4a5b5,P
odSandboxId:d4cfa30f1a32a450d85f51370323574b5a0bcae75643efe39250a8b24cc1a1c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761986712208719638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0eeda84be59c6c1c023d04bf2f88758,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:844d913e662bc4587cf597763a1bad42bb8a4bf500ce948d822cfcb86a7e9fde,PodSandboxId:e2f739ab181cd43a508788c71e0d98b6ca0994d643a2896de2364e7f842ffa0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761986712197993742,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d081dd6df6b55662a095a017ad5712,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdeec4098b47d6e27b77f71ac1761aeb26a09c97d53566cde6a7c5ae79150c25,PodSandboxId:f1c88f09470e5834b2b0cfcdaddaf03ac25c10fd6f3492dc69b5941eb059bbae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761986712168522475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abcff5cb337834c6fd7a11d68a6b7be4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35bb45a49c1f528c9112deb8bfa037389ae6fae43afcbb2f86e4c3ed61156bf8,PodSandboxId:80615bf9878bb70db26be3ecace94169c4b7e503113541f10f7df27e95d8c035,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761986712170158026,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5912e2b5f9c4192157a57bf3d5021f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505
,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4a7bff80-3075-4ba4-b077-16d029bd8aa8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:58:45 addons-994396 crio[817]: time="2025-11-01 08:58:45.502007493Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0749c63b-d944-4f71-b183-f97048118eac name=/runtime.v1.RuntimeService/Version
	Nov 01 08:58:45 addons-994396 crio[817]: time="2025-11-01 08:58:45.502079630Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0749c63b-d944-4f71-b183-f97048118eac name=/runtime.v1.RuntimeService/Version
	Nov 01 08:58:45 addons-994396 crio[817]: time="2025-11-01 08:58:45.504146125Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fc012f0e-01f6-47f4-be88-a0b2630a3d0e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 08:58:45 addons-994396 crio[817]: time="2025-11-01 08:58:45.505473668Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761987525505444372,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:454585,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fc012f0e-01f6-47f4-be88-a0b2630a3d0e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 08:58:45 addons-994396 crio[817]: time="2025-11-01 08:58:45.506518797Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d54bda6b-abe8-4553-893b-c4e1d76bad40 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:58:45 addons-994396 crio[817]: time="2025-11-01 08:58:45.506634942Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d54bda6b-abe8-4553-893b-c4e1d76bad40 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:58:45 addons-994396 crio[817]: time="2025-11-01 08:58:45.507719304Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9aac7eb34690309e8dbd81343ee4a3afed4182f729bfb09119b2d0449fcb5163,PodSandboxId:cdbcecc3e9d43396748d11feb94389c468413b4e4db1f33c0ffbb67ba8cb8455,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761987117609973399,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f6cc746-15b0-4ddb-9f8b-fa3a7e7133ea,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c914a21ca5c30d325bf10151384a21f9bbcc7e25b2d34ca61bfaddd16505122,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1761987080383755595,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:437ef3bce50ac8a7ca0b9a31a96e010fea2dd24bba8a7a5f778f7bb5721a6a9d,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1761987048807726890,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f73cee1644b036ab76f839b96acf06de4009bbf807c978116290374a0b56065c,PodSandboxId:147663b03fe636d80386c5b9e498c5fb95c78d278121e7fb146f12c7e973609d,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761986999531197788,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-9cxnd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bf616938-c2ab-4f4c-92c8-9fa4ab2f6be9,},Annotations:map[string]
string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:862808e2ff30fdd764f8aaf3d5b1a5df067d9f837db07ff0372f86bd3b55cab5,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1761986992483188170,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4eac7bee2514139306d8419dc1c70f3cc677629e0546239a0322053b09eab44,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1761986961550289998,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89e19f39781eba8b57e656eb2450f2409f9b0faf0e3401335506a480d9066dc6,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1761986930173408810,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68bf99b640c16170eb3d1decd09fc1b538fbd6fde76792990703d14d18fd9728,PodSandboxId:c090988aa5e05ea1d7a0662eb99922460d3efcf1e9882123710f19fefe939704,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1761986868787532616,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf63ab79-b3fa-4917-a62b-a0758d1521b0,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39137378c3801cd49058632db343f950f188a84e2ff8cf681c71963efac4314f,PodSandboxId:6eaf5e212ad1c55657254e78247ce413b9c2d3e12e8e2cd69b6ccde788266623,Metadata:&ContainerMetadata{Name
:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1761986866382667222,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ee1d9b2-a99a-4003-9c65-77bd5e500b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80b7ac026d7558ab3c69afb722ff55dfe32d67be3e2bf197089b95da3dd31104,PodSandboxId:5ef1abbd77f24535b60585d2197c8a2259c59626ad0eb005b609003b505409e3,Metada
ta:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761986864620312300,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-jbkmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19dc2ae7-668b-4952-9c2d-6602eac4449e,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63011b6ec66fda56834e6c96c9772b128675e14e51fd5b96d9518a8ba29fa35,PodSandbox
Id:eeeab7772fb0e74c5be38da53381a6b90d0d5c26e9c8b732d2e1c6eb63671c65,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761986864516805400,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-2pbx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9e973a4-20dd-4785-a3d6-1557c012cc76,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6
e0352b147e8a8fe43c9d94072f3f3fcc98914a55a5718cfd5fe168dcdb81f49,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1761986863046366251,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fbb154c5ba009280da1a426866a4cdde2195fb0006640dafb05c0da182a4866,PodSandboxId:058d4f2c90db7e8eae07ad5783426e56e467541eacbcb171f0f9227663407e68,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761986861153109309,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dmt9r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7e49bedc-b72d-400d-bc07-62040e55ac39,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e6c68a57ee535127b46ca112ce1439ee32d248af87fb4452856eb3e38c8eb2e,PodSandboxId:a5dfb28615faf962ed89b8003d79c80e87152c2a8d669af58898bd3254030389,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761986861018576547,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6ptqs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9fe7abf8-c7e2-47ee-ac99-699c34674a22,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d2226436f827529da95ea6b9148e9aad9e62a07499351f701e80b097311d036,PodSandboxId:c449271f0824b108061a1ee1fc23fbe6d16056014d0cfc3011aa2c20b94a8e24,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1761986829754350164,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-bzs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151e456a-63e0-4527-8511-34c4444fef48,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.
ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda41d22ea7ff808cb20920820ccf87f95d0c484f75f853dec58fc5d4aaa461b,PodSandboxId:e07af8e7a3ecad5569ae3da9545b988c374ac9f7b90e8533dd68c1dd6ecef92c,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761986826170977750,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-z8nnd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c555360c-9a9f-4f
dd-aa67-f18c3d2a4eb2,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b56bd6c195bd711f17cd7b927c9fbb20679383d08b6e954d3297e9850be5235,PodSandboxId:6d69749ca9bc78fa01c49c7d0757f3d0eafa3536279a622367a1a3b427e5d70c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1761986821805194743,Labels:map[string]string{io.kubernetes.container.name: local-pa
th-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-9ghvj,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d3c3231a-40d9-42f1-bc78-e2d1a104327a,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4c1be283a7f47690c854c85c4dcacc3e8b42f6727081c4a8a73e3e44c1d194,PodSandboxId:9f7ac0dd48cc1abeb4273f865cde830d51e77c8bd29a6c76ccecaf35745e99f7,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:176198675844940796
3,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d947f942-2149-492a-9b4e-1f9c22405815,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad7748982f904bf89ac86d1b7be83acfe37cfe9d240db5a3d2236808b8910a3,PodSandboxId:ca1dd787f338ac0254f2b930b7369f671d7ee68d7732bee6af1cf786d745c456,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c887
2c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761986733821709901,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0182754-0c9c-458b-a340-20ec025cb56c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bb5f4d4e768dfe5c0cf6bc80363bf72a32d74ddba50c19fc7e3e82b2268e1d3,PodSandboxId:fec37181f6706eb4994bc850d0e6623521190c923720024b4407780ba5c3168a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761986732059653348,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-vssmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b8c16e-b583-47df-a5c2-97218d3ec5be,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ff7b8e8784408623315cf07e8942d13f74e52cb65ad09e2d25796114020c1,PodSandboxId:d62d15d11c4955eb24e7866e8b7732b6d4471d399c0e33cef74d06eb40917eec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e
0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761986725130503569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2rqh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131b2b2-f9b9-4197-8bc7-4d1bc185c804,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0a2f86b38f42fab057b3fea7994c150
73ec1d05f3db97341f0fed0ad342cf9,PodSandboxId:e1fb2fcb1123b9a18ac17a1d8481c82478eed03828d094aab60d26b7c2f58bbd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761986724242985390,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fbmdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5dd6b4-2f38-4c9d-acd8-92f7984fd96a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80489befa62b8185c103a7d016a78a5924e4c5187536cb66142d1c5f8cc4a5b5,P
odSandboxId:d4cfa30f1a32a450d85f51370323574b5a0bcae75643efe39250a8b24cc1a1c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761986712208719638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0eeda84be59c6c1c023d04bf2f88758,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:844d913e662bc4587cf597763a1bad42bb8a4bf500ce948d822cfcb86a7e9fde,PodSandboxId:e2f739ab181cd43a508788c71e0d98b6ca0994d643a2896de2364e7f842ffa0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761986712197993742,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d081dd6df6b55662a095a017ad5712,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdeec4098b47d6e27b77f71ac1761aeb26a09c97d53566cde6a7c5ae79150c25,PodSandboxId:f1c88f09470e5834b2b0cfcdaddaf03ac25c10fd6f3492dc69b5941eb059bbae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761986712168522475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abcff5cb337834c6fd7a11d68a6b7be4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35bb45a49c1f528c9112deb8bfa037389ae6fae43afcbb2f86e4c3ed61156bf8,PodSandboxId:80615bf9878bb70db26be3ecace94169c4b7e503113541f10f7df27e95d8c035,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761986712170158026,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5912e2b5f9c4192157a57bf3d5021f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505
,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d54bda6b-abe8-4553-893b-c4e1d76bad40 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:58:45 addons-994396 crio[817]: time="2025-11-01 08:58:45.551233615Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9762b2bb-fc1d-4bc9-a24d-51860e6d0f4b name=/runtime.v1.RuntimeService/Version
	Nov 01 08:58:45 addons-994396 crio[817]: time="2025-11-01 08:58:45.551342024Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9762b2bb-fc1d-4bc9-a24d-51860e6d0f4b name=/runtime.v1.RuntimeService/Version
	Nov 01 08:58:45 addons-994396 crio[817]: time="2025-11-01 08:58:45.552594300Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f92cf7e5-3d26-43db-9322-01555406ff36 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 08:58:45 addons-994396 crio[817]: time="2025-11-01 08:58:45.553748579Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761987525553719458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:454585,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f92cf7e5-3d26-43db-9322-01555406ff36 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 08:58:45 addons-994396 crio[817]: time="2025-11-01 08:58:45.554381569Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6da7be39-fe56-41f4-b2e9-9cdda0578085 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:58:45 addons-994396 crio[817]: time="2025-11-01 08:58:45.554463022Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6da7be39-fe56-41f4-b2e9-9cdda0578085 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:58:45 addons-994396 crio[817]: time="2025-11-01 08:58:45.555188827Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9aac7eb34690309e8dbd81343ee4a3afed4182f729bfb09119b2d0449fcb5163,PodSandboxId:cdbcecc3e9d43396748d11feb94389c468413b4e4db1f33c0ffbb67ba8cb8455,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761987117609973399,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f6cc746-15b0-4ddb-9f8b-fa3a7e7133ea,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c914a21ca5c30d325bf10151384a21f9bbcc7e25b2d34ca61bfaddd16505122,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1761987080383755595,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:437ef3bce50ac8a7ca0b9a31a96e010fea2dd24bba8a7a5f778f7bb5721a6a9d,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1761987048807726890,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f73cee1644b036ab76f839b96acf06de4009bbf807c978116290374a0b56065c,PodSandboxId:147663b03fe636d80386c5b9e498c5fb95c78d278121e7fb146f12c7e973609d,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761986999531197788,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-9cxnd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bf616938-c2ab-4f4c-92c8-9fa4ab2f6be9,},Annotations:map[string]
string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:862808e2ff30fdd764f8aaf3d5b1a5df067d9f837db07ff0372f86bd3b55cab5,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1761986992483188170,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4eac7bee2514139306d8419dc1c70f3cc677629e0546239a0322053b09eab44,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1761986961550289998,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89e19f39781eba8b57e656eb2450f2409f9b0faf0e3401335506a480d9066dc6,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1761986930173408810,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68bf99b640c16170eb3d1decd09fc1b538fbd6fde76792990703d14d18fd9728,PodSandboxId:c090988aa5e05ea1d7a0662eb99922460d3efcf1e9882123710f19fefe939704,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1761986868787532616,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf63ab79-b3fa-4917-a62b-a0758d1521b0,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39137378c3801cd49058632db343f950f188a84e2ff8cf681c71963efac4314f,PodSandboxId:6eaf5e212ad1c55657254e78247ce413b9c2d3e12e8e2cd69b6ccde788266623,Metadata:&ContainerMetadata{Name
:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1761986866382667222,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ee1d9b2-a99a-4003-9c65-77bd5e500b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80b7ac026d7558ab3c69afb722ff55dfe32d67be3e2bf197089b95da3dd31104,PodSandboxId:5ef1abbd77f24535b60585d2197c8a2259c59626ad0eb005b609003b505409e3,Metada
ta:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761986864620312300,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-jbkmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19dc2ae7-668b-4952-9c2d-6602eac4449e,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63011b6ec66fda56834e6c96c9772b128675e14e51fd5b96d9518a8ba29fa35,PodSandbox
Id:eeeab7772fb0e74c5be38da53381a6b90d0d5c26e9c8b732d2e1c6eb63671c65,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761986864516805400,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-2pbx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9e973a4-20dd-4785-a3d6-1557c012cc76,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6
e0352b147e8a8fe43c9d94072f3f3fcc98914a55a5718cfd5fe168dcdb81f49,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1761986863046366251,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fbb154c5ba009280da1a426866a4cdde2195fb0006640dafb05c0da182a4866,PodSandboxId:058d4f2c90db7e8eae07ad5783426e56e467541eacbcb171f0f9227663407e68,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761986861153109309,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dmt9r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7e49bedc-b72d-400d-bc07-62040e55ac39,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e6c68a57ee535127b46ca112ce1439ee32d248af87fb4452856eb3e38c8eb2e,PodSandboxId:a5dfb28615faf962ed89b8003d79c80e87152c2a8d669af58898bd3254030389,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761986861018576547,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6ptqs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9fe7abf8-c7e2-47ee-ac99-699c34674a22,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d2226436f827529da95ea6b9148e9aad9e62a07499351f701e80b097311d036,PodSandboxId:c449271f0824b108061a1ee1fc23fbe6d16056014d0cfc3011aa2c20b94a8e24,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1761986829754350164,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-bzs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151e456a-63e0-4527-8511-34c4444fef48,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.
ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda41d22ea7ff808cb20920820ccf87f95d0c484f75f853dec58fc5d4aaa461b,PodSandboxId:e07af8e7a3ecad5569ae3da9545b988c374ac9f7b90e8533dd68c1dd6ecef92c,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761986826170977750,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-z8nnd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c555360c-9a9f-4f
dd-aa67-f18c3d2a4eb2,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b56bd6c195bd711f17cd7b927c9fbb20679383d08b6e954d3297e9850be5235,PodSandboxId:6d69749ca9bc78fa01c49c7d0757f3d0eafa3536279a622367a1a3b427e5d70c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1761986821805194743,Labels:map[string]string{io.kubernetes.container.name: local-pa
th-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-9ghvj,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d3c3231a-40d9-42f1-bc78-e2d1a104327a,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4c1be283a7f47690c854c85c4dcacc3e8b42f6727081c4a8a73e3e44c1d194,PodSandboxId:9f7ac0dd48cc1abeb4273f865cde830d51e77c8bd29a6c76ccecaf35745e99f7,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:176198675844940796
3,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d947f942-2149-492a-9b4e-1f9c22405815,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad7748982f904bf89ac86d1b7be83acfe37cfe9d240db5a3d2236808b8910a3,PodSandboxId:ca1dd787f338ac0254f2b930b7369f671d7ee68d7732bee6af1cf786d745c456,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c887
2c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761986733821709901,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0182754-0c9c-458b-a340-20ec025cb56c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bb5f4d4e768dfe5c0cf6bc80363bf72a32d74ddba50c19fc7e3e82b2268e1d3,PodSandboxId:fec37181f6706eb4994bc850d0e6623521190c923720024b4407780ba5c3168a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761986732059653348,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-vssmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b8c16e-b583-47df-a5c2-97218d3ec5be,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ff7b8e8784408623315cf07e8942d13f74e52cb65ad09e2d25796114020c1,PodSandboxId:d62d15d11c4955eb24e7866e8b7732b6d4471d399c0e33cef74d06eb40917eec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e
0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761986725130503569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2rqh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131b2b2-f9b9-4197-8bc7-4d1bc185c804,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0a2f86b38f42fab057b3fea7994c150
73ec1d05f3db97341f0fed0ad342cf9,PodSandboxId:e1fb2fcb1123b9a18ac17a1d8481c82478eed03828d094aab60d26b7c2f58bbd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761986724242985390,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fbmdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5dd6b4-2f38-4c9d-acd8-92f7984fd96a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80489befa62b8185c103a7d016a78a5924e4c5187536cb66142d1c5f8cc4a5b5,P
odSandboxId:d4cfa30f1a32a450d85f51370323574b5a0bcae75643efe39250a8b24cc1a1c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761986712208719638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0eeda84be59c6c1c023d04bf2f88758,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:844d913e662bc4587cf597763a1bad42bb8a4bf500ce948d822cfcb86a7e9fde,PodSandboxId:e2f739ab181cd43a508788c71e0d98b6ca0994d643a2896de2364e7f842ffa0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761986712197993742,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d081dd6df6b55662a095a017ad5712,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdeec4098b47d6e27b77f71ac1761aeb26a09c97d53566cde6a7c5ae79150c25,PodSandboxId:f1c88f09470e5834b2b0cfcdaddaf03ac25c10fd6f3492dc69b5941eb059bbae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761986712168522475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abcff5cb337834c6fd7a11d68a6b7be4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35bb45a49c1f528c9112deb8bfa037389ae6fae43afcbb2f86e4c3ed61156bf8,PodSandboxId:80615bf9878bb70db26be3ecace94169c4b7e503113541f10f7df27e95d8c035,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761986712170158026,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5912e2b5f9c4192157a57bf3d5021f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505
,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6da7be39-fe56-41f4-b2e9-9cdda0578085 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	9aac7eb346903       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          6 minutes ago       Running             busybox                                  0                   cdbcecc3e9d43       busybox
	8c914a21ca5c3       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          7 minutes ago       Running             csi-snapshotter                          0                   89c5974bdcafd       csi-hostpathplugin-7l7ps
	437ef3bce50ac       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          7 minutes ago       Running             csi-provisioner                          0                   89c5974bdcafd       csi-hostpathplugin-7l7ps
	f73cee1644b03       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd                             8 minutes ago       Running             controller                               0                   147663b03fe63       ingress-nginx-controller-675c5ddd98-9cxnd
	862808e2ff30f       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            8 minutes ago       Running             liveness-probe                           0                   89c5974bdcafd       csi-hostpathplugin-7l7ps
	a4eac7bee2514       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           9 minutes ago       Running             hostpath                                 0                   89c5974bdcafd       csi-hostpathplugin-7l7ps
	89e19f39781eb       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                9 minutes ago       Running             node-driver-registrar                    0                   89c5974bdcafd       csi-hostpathplugin-7l7ps
	68bf99b640c16       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              10 minutes ago      Running             csi-resizer                              0                   c090988aa5e05       csi-hostpath-resizer-0
	39137378c3801       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             10 minutes ago      Running             csi-attacher                             0                   6eaf5e212ad1c       csi-hostpath-attacher-0
	80b7ac026d755       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      11 minutes ago      Running             volume-snapshot-controller               0                   5ef1abbd77f24       snapshot-controller-7d9fbc56b8-jbkmr
	a63011b6ec66f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      11 minutes ago      Running             volume-snapshot-controller               0                   eeeab7772fb0e       snapshot-controller-7d9fbc56b8-2pbx5
	6e0352b147e8a       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   11 minutes ago      Running             csi-external-health-monitor-controller   0                   89c5974bdcafd       csi-hostpathplugin-7l7ps
	7fbb154c5ba00       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39                   11 minutes ago      Exited              patch                                    0                   058d4f2c90db7       ingress-nginx-admission-patch-dmt9r
	5e6c68a57ee53       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39                   11 minutes ago      Exited              create                                   0                   a5dfb28615faf       ingress-nginx-admission-create-6ptqs
	6d2226436f827       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              11 minutes ago      Running             registry-proxy                           0                   c449271f0824b       registry-proxy-bzs78
	dda41d22ea7ff       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            11 minutes ago      Running             gadget                                   0                   e07af8e7a3eca       gadget-z8nnd
	9b56bd6c195bd       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             11 minutes ago      Running             local-path-provisioner                   0                   6d69749ca9bc7       local-path-provisioner-648f6765c9-9ghvj
	7b4c1be283a7f       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               12 minutes ago      Running             minikube-ingress-dns                     0                   9f7ac0dd48cc1       kube-ingress-dns-minikube
	2ad7748982f90       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             13 minutes ago      Running             storage-provisioner                      0                   ca1dd787f338a       storage-provisioner
	9bb5f4d4e768d       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     13 minutes ago      Running             amd-gpu-device-plugin                    0                   fec37181f6706       amd-gpu-device-plugin-vssmp
	9d0ff7b8e8784       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             13 minutes ago      Running             coredns                                  0                   d62d15d11c495       coredns-66bc5c9577-2rqh8
	9d0a2f86b38f4       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             13 minutes ago      Running             kube-proxy                               0                   e1fb2fcb1123b       kube-proxy-fbmdq
	80489befa62b8       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             13 minutes ago      Running             kube-scheduler                           0                   d4cfa30f1a32a       kube-scheduler-addons-994396
	844d913e662bc       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             13 minutes ago      Running             etcd                                     0                   e2f739ab181cd       etcd-addons-994396
	35bb45a49c1f5       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             13 minutes ago      Running             kube-controller-manager                  0                   80615bf9878bb       kube-controller-manager-addons-994396
	fdeec4098b47d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             13 minutes ago      Running             kube-apiserver                           0                   f1c88f09470e5       kube-apiserver-addons-994396
	
	
	==> coredns [9d0ff7b8e8784408623315cf07e8942d13f74e52cb65ad09e2d25796114020c1] <==
	[INFO] 10.244.0.8:36722 - 49777 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000366795s
	[INFO] 10.244.0.8:38018 - 40061 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.00017285s
	[INFO] 10.244.0.8:38018 - 63435 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000273392s
	[INFO] 10.244.0.8:38018 - 24030 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00009464s
	[INFO] 10.244.0.8:38018 - 24985 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000354364s
	[INFO] 10.244.0.8:38018 - 49456 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00011686s
	[INFO] 10.244.0.8:38018 - 49411 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000369783s
	[INFO] 10.244.0.8:38018 - 40178 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000079712s
	[INFO] 10.244.0.8:38018 - 18938 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000314513s
	[INFO] 10.244.0.8:33501 - 43721 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000155257s
	[INFO] 10.244.0.8:33501 - 56739 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000200681s
	[INFO] 10.244.0.8:33501 - 25119 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000079174s
	[INFO] 10.244.0.8:33501 - 38493 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.001220593s
	[INFO] 10.244.0.8:33501 - 39673 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000133695s
	[INFO] 10.244.0.8:33501 - 49497 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000244083s
	[INFO] 10.244.0.8:33501 - 15742 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000102764s
	[INFO] 10.244.0.8:33501 - 904 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.001069569s
	[INFO] 10.244.0.8:55843 - 9376 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000150846s
	[INFO] 10.244.0.8:55843 - 38007 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000074252s
	[INFO] 10.244.0.8:55843 - 21911 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000179918s
	[INFO] 10.244.0.8:55843 - 49125 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000806217s
	[INFO] 10.244.0.8:55843 - 17405 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000083321s
	[INFO] 10.244.0.8:55843 - 31903 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000101101s
	[INFO] 10.244.0.8:55843 - 12201 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000147347s
	[INFO] 10.244.0.8:55843 - 56516 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000126947s
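The NXDOMAIN/NOERROR pattern above is ordinary search-path expansion (ndots:5) from a client pod at 10.244.0.8: each burst of lookups for registry.kube-system.svc.cluster.local ends in NOERROR A/AAAA answers, so in-cluster DNS is resolving the registry Service correctly and is not the failure here. A spot-check of the same lookup, sketched with a hypothetical helper pod (assuming a busybox image can still be pulled, which the rate-limit errors later in this report may prevent):

	kubectl --context addons-994396 run dns-check --rm -it --restart=Never \
	  --image=busybox:1.36 -- nslookup registry.kube-system.svc.cluster.local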
	
	
	==> describe nodes <==
	Name:               addons-994396
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-994396
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=addons-994396
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T08_45_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-994396
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-994396"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 08:45:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-994396
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 08:58:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 08:56:22 +0000   Sat, 01 Nov 2025 08:45:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 08:56:22 +0000   Sat, 01 Nov 2025 08:45:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 08:56:22 +0000   Sat, 01 Nov 2025 08:45:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 08:56:22 +0000   Sat, 01 Nov 2025 08:45:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    addons-994396
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 47158355a9594cbf84ea23a10000597a
	  System UUID:                47158355-a959-4cbf-84ea-23a10000597a
	  Boot ID:                    8b22796c-545f-4b51-954a-eb39441cd160
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (23 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m10s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  gadget                      gadget-z8nnd                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-9cxnd                     100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         13m
	  kube-system                 amd-gpu-device-plugin-vssmp                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-66bc5c9577-2rqh8                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     13m
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 csi-hostpathplugin-7l7ps                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-addons-994396                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         13m
	  kube-system                 kube-apiserver-addons-994396                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-994396                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-fbmdq                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-994396                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 registry-6b586f9694-b4ph6                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 registry-proxy-bzs78                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 snapshot-controller-7d9fbc56b8-2pbx5                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 snapshot-controller-7d9fbc56b8-jbkmr                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  local-path-storage          helper-pod-create-pvc-2db794c4-2444-4d03-b933-772cf722902e    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m17s
	  local-path-storage          local-path-provisioner-648f6765c9-9ghvj                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node addons-994396 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node addons-994396 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node addons-994396 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node addons-994396 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node addons-994396 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node addons-994396 status is now: NodeHasSufficientPID
	  Normal  NodeReady                13m                kubelet          Node addons-994396 status is now: NodeReady
	  Normal  RegisteredNode           13m                node-controller  Node addons-994396 event: Registered Node addons-994396 in Controller
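Capacity and scheduling look healthy in this snapshot: the node is Ready, total requests are only 850m CPU / 260Mi memory on a 2-CPU, ~4Gi node, and registry-6b586f9694-b4ph6 has been scheduled for 13m, so the pod is stuck on its image pull rather than on resources. An illustrative way to list everything that has not reached Running, purely as a check against this snapshot:

	kubectl --context addons-994396 get pods -A --field-selector=status.phase!=Running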
	
	
	==> dmesg <==
	[Nov 1 08:46] kauditd_printk_skb: 5 callbacks suppressed
	[Nov 1 08:47] kauditd_printk_skb: 32 callbacks suppressed
	[ +34.333332] kauditd_printk_skb: 101 callbacks suppressed
	[  +3.822306] kauditd_printk_skb: 111 callbacks suppressed
	[  +1.002792] kauditd_printk_skb: 88 callbacks suppressed
	[Nov 1 08:49] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.000036] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.000133] kauditd_printk_skb: 29 callbacks suppressed
	[ +11.240953] kauditd_printk_skb: 41 callbacks suppressed
	[Nov 1 08:50] kauditd_printk_skb: 17 callbacks suppressed
	[ +34.452421] kauditd_printk_skb: 2 callbacks suppressed
	[Nov 1 08:51] kauditd_printk_skb: 26 callbacks suppressed
	[  +0.000047] kauditd_printk_skb: 5 callbacks suppressed
	[ +21.931610] kauditd_printk_skb: 26 callbacks suppressed
	[Nov 1 08:52] kauditd_printk_skb: 5 callbacks suppressed
	[  +6.008516] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.922747] kauditd_printk_skb: 38 callbacks suppressed
	[  +6.151130] kauditd_printk_skb: 37 callbacks suppressed
	[ +11.857033] kauditd_printk_skb: 84 callbacks suppressed
	[  +0.000069] kauditd_printk_skb: 22 callbacks suppressed
	[Nov 1 08:54] kauditd_printk_skb: 26 callbacks suppressed
	[ +40.501255] kauditd_printk_skb: 2 callbacks suppressed
	[Nov 1 08:55] kauditd_printk_skb: 9 callbacks suppressed
	[Nov 1 08:56] kauditd_printk_skb: 45 callbacks suppressed
	[Nov 1 08:57] kauditd_printk_skb: 38 callbacks suppressed
	
	
	==> etcd [844d913e662bc4587cf597763a1bad42bb8a4bf500ce948d822cfcb86a7e9fde] <==
	{"level":"info","ts":"2025-11-01T08:47:54.978149Z","caller":"traceutil/trace.go:172","msg":"trace[879398792] linearizableReadLoop","detail":"{readStateIndex:1248; appliedIndex:1248; }","duration":"128.792993ms","start":"2025-11-01T08:47:54.849340Z","end":"2025-11-01T08:47:54.978133Z","steps":["trace[879398792] 'read index received'  (duration: 128.787273ms)","trace[879398792] 'applied index is now lower than readState.Index'  (duration: 4.859µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T08:47:54.978274Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.918573ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:47:54.978294Z","caller":"traceutil/trace.go:172","msg":"trace[478888116] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1194; }","duration":"128.951874ms","start":"2025-11-01T08:47:54.849337Z","end":"2025-11-01T08:47:54.978289Z","steps":["trace[478888116] 'agreement among raft nodes before linearized reading'  (duration: 128.896473ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:47:54.978301Z","caller":"traceutil/trace.go:172","msg":"trace[127276739] transaction","detail":"{read_only:false; response_revision:1195; number_of_response:1; }","duration":"193.938157ms","start":"2025-11-01T08:47:54.784350Z","end":"2025-11-01T08:47:54.978289Z","steps":["trace[127276739] 'process raft request'  (duration: 193.811655ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:50:03.807211Z","caller":"traceutil/trace.go:172","msg":"trace[306428088] transaction","detail":"{read_only:false; response_revision:1410; number_of_response:1; }","duration":"143.076836ms","start":"2025-11-01T08:50:03.664107Z","end":"2025-11-01T08:50:03.807184Z","steps":["trace[306428088] 'process raft request'  (duration: 142.860459ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:50:30.399983Z","caller":"traceutil/trace.go:172","msg":"trace[417490432] transaction","detail":"{read_only:false; response_revision:1462; number_of_response:1; }","duration":"105.005558ms","start":"2025-11-01T08:50:30.294965Z","end":"2025-11-01T08:50:30.399970Z","steps":["trace[417490432] 'process raft request'  (duration: 104.840267ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:51:25.785305Z","caller":"traceutil/trace.go:172","msg":"trace[446064097] linearizableReadLoop","detail":"{readStateIndex:1675; appliedIndex:1675; }","duration":"202.139299ms","start":"2025-11-01T08:51:25.583130Z","end":"2025-11-01T08:51:25.785270Z","steps":["trace[446064097] 'read index received'  (duration: 202.133895ms)","trace[446064097] 'applied index is now lower than readState.Index'  (duration: 4.594µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T08:51:25.785474Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"202.320618ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:51:25.785498Z","caller":"traceutil/trace.go:172","msg":"trace[2127751376] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1576; }","duration":"202.392505ms","start":"2025-11-01T08:51:25.583101Z","end":"2025-11-01T08:51:25.785493Z","steps":["trace[2127751376] 'agreement among raft nodes before linearized reading'  (duration: 202.298341ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:51:25.785518Z","caller":"traceutil/trace.go:172","msg":"trace[25251410] transaction","detail":"{read_only:false; response_revision:1577; number_of_response:1; }","duration":"230.552599ms","start":"2025-11-01T08:51:25.554955Z","end":"2025-11-01T08:51:25.785507Z","steps":["trace[25251410] 'process raft request'  (duration: 230.448007ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:52:18.027453Z","caller":"traceutil/trace.go:172","msg":"trace[1612683542] linearizableReadLoop","detail":"{readStateIndex:1872; appliedIndex:1872; }","duration":"169.871386ms","start":"2025-11-01T08:52:17.857553Z","end":"2025-11-01T08:52:18.027424Z","steps":["trace[1612683542] 'read index received'  (duration: 169.865757ms)","trace[1612683542] 'applied index is now lower than readState.Index'  (duration: 4.911µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T08:52:18.027601Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"170.004057ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:52:18.027618Z","caller":"traceutil/trace.go:172","msg":"trace[354966435] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1760; }","duration":"170.064613ms","start":"2025-11-01T08:52:17.857549Z","end":"2025-11-01T08:52:18.027613Z","steps":["trace[354966435] 'agreement among raft nodes before linearized reading'  (duration: 169.976661ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:52:18.027617Z","caller":"traceutil/trace.go:172","msg":"trace[182557049] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1761; }","duration":"175.595316ms","start":"2025-11-01T08:52:17.852012Z","end":"2025-11-01T08:52:18.027607Z","steps":["trace[182557049] 'process raft request'  (duration: 175.503416ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:52:23.484737Z","caller":"traceutil/trace.go:172","msg":"trace[1326759402] linearizableReadLoop","detail":"{readStateIndex:1904; appliedIndex:1904; }","duration":"340.503004ms","start":"2025-11-01T08:52:23.144214Z","end":"2025-11-01T08:52:23.484717Z","steps":["trace[1326759402] 'read index received'  (duration: 340.496208ms)","trace[1326759402] 'applied index is now lower than readState.Index'  (duration: 5.868µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T08:52:23.485008Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"340.771395ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1114"}
	{"level":"info","ts":"2025-11-01T08:52:23.485058Z","caller":"traceutil/trace.go:172","msg":"trace[1039449345] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1790; }","duration":"340.841883ms","start":"2025-11-01T08:52:23.144209Z","end":"2025-11-01T08:52:23.485051Z","steps":["trace[1039449345] 'agreement among raft nodes before linearized reading'  (duration: 340.62868ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T08:52:23.485106Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T08:52:23.144193Z","time spent":"340.902265ms","remote":"127.0.0.1:36552","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":1,"response size":1137,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 "}
	{"level":"warn","ts":"2025-11-01T08:52:23.485553Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"287.574901ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:52:23.485588Z","caller":"traceutil/trace.go:172","msg":"trace[1585287071] range","detail":"{range_begin:/registry/namespaces; range_end:; response_count:0; response_revision:1791; }","duration":"287.617514ms","start":"2025-11-01T08:52:23.197963Z","end":"2025-11-01T08:52:23.485581Z","steps":["trace[1585287071] 'agreement among raft nodes before linearized reading'  (duration: 287.549253ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:52:23.485660Z","caller":"traceutil/trace.go:172","msg":"trace[1103263823] transaction","detail":"{read_only:false; response_revision:1791; number_of_response:1; }","duration":"361.459988ms","start":"2025-11-01T08:52:23.124191Z","end":"2025-11-01T08:52:23.485651Z","steps":["trace[1103263823] 'process raft request'  (duration: 361.180443ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T08:52:23.485795Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T08:52:23.124175Z","time spent":"361.507625ms","remote":"127.0.0.1:36760","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1766 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2025-11-01T08:55:13.580313Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1434}
	{"level":"info","ts":"2025-11-01T08:55:13.648379Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1434,"took":"67.304726ms","hash":2547452093,"current-db-size-bytes":5730304,"current-db-size":"5.7 MB","current-db-size-in-use-bytes":3653632,"current-db-size-in-use":"3.7 MB"}
	{"level":"info","ts":"2025-11-01T08:55:13.648498Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2547452093,"revision":1434,"compact-revision":-1}
	
	
	==> kernel <==
	 08:58:45 up 14 min,  0 users,  load average: 0.15, 0.39, 0.38
	Linux addons-994396 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [fdeec4098b47d6e27b77f71ac1761aeb26a09c97d53566cde6a7c5ae79150c25] <==
	W1101 08:46:31.751759       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 08:46:31.751828       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1101 08:46:31.751848       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1101 08:46:31.752853       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 08:46:31.752966       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1101 08:46:31.753020       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1101 08:48:03.292013       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.19.139:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.19.139:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.19.139:443: connect: connection refused" logger="UnhandledError"
	W1101 08:48:03.296407       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 08:48:03.296747       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1101 08:48:03.297742       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.19.139:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.19.139:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.19.139:443: connect: connection refused" logger="UnhandledError"
	E1101 08:48:03.298496       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.19.139:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.19.139:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.19.139:443: connect: connection refused" logger="UnhandledError"
	I1101 08:48:03.353240       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1101 08:52:03.525330       1 conn.go:339] Error on socket receive: read tcp 192.168.39.195:8443->192.168.39.1:42910: use of closed network connection
	E1101 08:52:03.723785       1 conn.go:339] Error on socket receive: read tcp 192.168.39.195:8443->192.168.39.1:42940: use of closed network connection
	I1101 08:52:12.984624       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.226.149"}
	I1101 08:53:04.341444       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1101 08:55:15.302985       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 08:56:08.891135       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1101 08:56:09.140799       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.237.168"}
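The apiserver errors above all concern the v1beta1.metrics.k8s.io APIService while metrics-server was unreachable (connection refused to 10.99.19.139:443); by 08:53:04 the aggregation controller reports the item removed from its queue, so this churn is unrelated to the registry image-pull failure. If that APIService is still registered, its availability condition can be inspected directly, shown only as a sketch of the kind of check involved:

	kubectl --context addons-994396 get apiservice v1beta1.metrics.k8s.io -o wide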
	
	
	==> kube-controller-manager [35bb45a49c1f528c9112deb8bfa037389ae6fae43afcbb2f86e4c3ed61156bf8] <==
	E1101 08:46:22.433268       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 08:46:22.496038       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1101 08:46:52.438789       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 08:46:52.504482       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1101 08:47:22.446493       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 08:47:22.515370       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1101 08:47:52.452536       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 08:47:52.535721       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1101 08:52:17.008825       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I1101 08:52:35.860282       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	E1101 08:54:57.714310       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:54:57.738576       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:54:57.766801       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:54:57.805443       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:54:57.865423       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:54:57.962606       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:54:58.138236       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:54:58.477214       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:54:59.131849       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:55:00.430311       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:55:03.008821       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:55:08.147281       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:55:18.405556       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:55:27.269224       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	I1101 08:56:13.507559       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
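The repeated "deletion of namespace yakd-dashboard failed" errors are the namespace controller retrying with backoff while yakd-dashboard pods were still terminating; the last line shows the namespace was deleted at 08:56:13, so this is transient noise rather than part of the failure. A confirmation check (expected to return NotFound after the deletion above):

	kubectl --context addons-994396 get namespace yakd-dashboard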
	
	
	==> kube-proxy [9d0a2f86b38f42fab057b3fea7994c15073ec1d05f3db97341f0fed0ad342cf9] <==
	I1101 08:45:24.962819       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 08:45:25.066839       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 08:45:25.068064       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.195"]
	E1101 08:45:25.073313       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 08:45:25.410848       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 08:45:25.410962       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 08:45:25.410991       1 server_linux.go:132] "Using iptables Proxier"
	I1101 08:45:25.477946       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 08:45:25.478244       1 server.go:527] "Version info" version="v1.34.1"
	I1101 08:45:25.478277       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 08:45:25.484125       1 config.go:106] "Starting endpoint slice config controller"
	I1101 08:45:25.484405       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 08:45:25.491275       1 config.go:200] "Starting service config controller"
	I1101 08:45:25.491309       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 08:45:25.494813       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 08:45:25.496161       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 08:45:25.495379       1 config.go:309] "Starting node config controller"
	I1101 08:45:25.506423       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 08:45:25.506433       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 08:45:25.584706       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 08:45:25.592170       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 08:45:25.598016       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [80489befa62b8185c103a7d016a78a5924e4c5187536cb66142d1c5f8cc4a5b5] <==
	E1101 08:45:15.349464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 08:45:15.349542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 08:45:15.349728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 08:45:15.349881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 08:45:15.352076       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 08:45:15.352119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 08:45:15.352139       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 08:45:15.352358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 08:45:15.352409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 08:45:15.357367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 08:45:15.357513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 08:45:15.357652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 08:45:16.203110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 08:45:16.263373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 08:45:16.299073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 08:45:16.424658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 08:45:16.486112       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 08:45:16.556670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 08:45:16.568573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 08:45:16.598275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 08:45:16.651957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 08:45:16.662617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 08:45:16.674245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 08:45:16.759792       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1101 08:45:19.143863       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 08:57:48 addons-994396 kubelet[1497]: E1101 08:57:48.407519    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761987468407054671  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:57:48 addons-994396 kubelet[1497]: E1101 08:57:48.407547    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761987468407054671  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:57:53 addons-994396 kubelet[1497]: I1101 08:57:53.969689    1497 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-bzs78" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 08:57:58 addons-994396 kubelet[1497]: E1101 08:57:58.410365    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761987478409873164  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:57:58 addons-994396 kubelet[1497]: E1101 08:57:58.410396    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761987478409873164  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:58:08 addons-994396 kubelet[1497]: E1101 08:58:08.413554    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761987488413141105  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:58:08 addons-994396 kubelet[1497]: E1101 08:58:08.413603    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761987488413141105  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:58:11 addons-994396 kubelet[1497]: E1101 08:58:11.006157    1497 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from image index: reading manifest sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 01 08:58:11 addons-994396 kubelet[1497]: E1101 08:58:11.006251    1497 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from image index: reading manifest sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 01 08:58:11 addons-994396 kubelet[1497]: E1101 08:58:11.006543    1497 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(8623da74-791e-4fd6-a974-60ebca5738a7): ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 01 08:58:11 addons-994396 kubelet[1497]: E1101 08:58:11.006593    1497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"fetching target platform image selected from image index: reading manifest sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="8623da74-791e-4fd6-a974-60ebca5738a7"
	Nov 01 08:58:18 addons-994396 kubelet[1497]: E1101 08:58:18.416995    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761987498416500787  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:58:18 addons-994396 kubelet[1497]: E1101 08:58:18.417042    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761987498416500787  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:58:20 addons-994396 kubelet[1497]: I1101 08:58:20.969340    1497 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-vssmp" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 08:58:24 addons-994396 kubelet[1497]: E1101 08:58:24.971509    1497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="8623da74-791e-4fd6-a974-60ebca5738a7"
	Nov 01 08:58:28 addons-994396 kubelet[1497]: E1101 08:58:28.420006    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761987508419533730  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:58:28 addons-994396 kubelet[1497]: E1101 08:58:28.420050    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761987508419533730  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:58:37 addons-994396 kubelet[1497]: I1101 08:58:37.971326    1497 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 08:58:38 addons-994396 kubelet[1497]: E1101 08:58:38.423404    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761987518422709627  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:58:38 addons-994396 kubelet[1497]: E1101 08:58:38.423432    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761987518422709627  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:58:39 addons-994396 kubelet[1497]: E1101 08:58:39.970457    1497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="8623da74-791e-4fd6-a974-60ebca5738a7"
	Nov 01 08:58:41 addons-994396 kubelet[1497]: E1101 08:58:41.102645    1497 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e"
	Nov 01 08:58:41 addons-994396 kubelet[1497]: E1101 08:58:41.102699    1497 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e"
	Nov 01 08:58:41 addons-994396 kubelet[1497]: E1101 08:58:41.103649    1497 kuberuntime_manager.go:1449] "Unhandled Error" err="container registry start failed in pod registry-6b586f9694-b4ph6_kube-system(f2c8e5be-bee4-4b31-a8dc-ee43d6a6430c): ErrImagePull: reading manifest sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 01 08:58:41 addons-994396 kubelet[1497]: E1101 08:58:41.103714    1497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ErrImagePull: \"reading manifest sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-6b586f9694-b4ph6" podUID="f2c8e5be-bee4-4b31-a8dc-ee43d6a6430c"
	
	
	==> storage-provisioner [2ad7748982f904bf89ac86d1b7be83acfe37cfe9d240db5a3d2236808b8910a3] <==
	W1101 08:58:21.569973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:23.573339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:23.583535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:25.586430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:25.595195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:27.599263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:27.606376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:29.610104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:29.615171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:31.619714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:31.625437       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:33.628790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:33.635258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:35.639864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:35.645698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:37.649119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:37.654832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:39.658137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:39.663590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:41.667464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:41.673197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:43.677080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:43.683171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:45.687080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:45.696355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-994396 -n addons-994396
helpers_test.go:269: (dbg) Run:  kubectl --context addons-994396 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-6ptqs ingress-nginx-admission-patch-dmt9r registry-6b586f9694-b4ph6 helper-pod-create-pvc-2db794c4-2444-4d03-b933-772cf722902e
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-994396 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-6ptqs ingress-nginx-admission-patch-dmt9r registry-6b586f9694-b4ph6 helper-pod-create-pvc-2db794c4-2444-4d03-b933-772cf722902e
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-994396 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-6ptqs ingress-nginx-admission-patch-dmt9r registry-6b586f9694-b4ph6 helper-pod-create-pvc-2db794c4-2444-4d03-b933-772cf722902e: exit status 1 (94.325163ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-994396/192.168.39.195
	Start Time:       Sat, 01 Nov 2025 08:56:09 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rlw58 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rlw58:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  2m37s                default-scheduler  Successfully assigned default/nginx to addons-994396
	  Warning  Failed     80s                  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     80s                  kubelet            Error: ErrImagePull
	  Normal   BackOff    79s                  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     79s                  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    67s (x2 over 2m37s)  kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-994396/192.168.39.195
	Start Time:       Sat, 01 Nov 2025 08:52:44 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mngk2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-mngk2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m2s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-994396
	  Warning  Failed     5m20s                kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    114s (x3 over 6m2s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     35s (x3 over 5m20s)  kubelet            Error: ErrImagePull
	  Warning  Failed     35s (x2 over 2m20s)  kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    7s (x4 over 5m20s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     7s (x4 over 5m20s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-65r97 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-65r97:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-6ptqs" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-dmt9r" not found
	Error from server (NotFound): pods "registry-6b586f9694-b4ph6" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-2db794c4-2444-4d03-b933-772cf722902e" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-994396 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-6ptqs ingress-nginx-admission-patch-dmt9r registry-6b586f9694-b4ph6 helper-pod-create-pvc-2db794c4-2444-4d03-b933-772cf722902e: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-994396 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-994396 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-994396 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.098412756s)
--- FAIL: TestAddons/parallel/CSI (377.14s)
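The ErrImagePull/ImagePullBackOff events above all point at Docker Hub's unauthenticated pull rate limit (toomanyrequests). A minimal sketch of two possible workarounds for a run like this, assuming the addons-994396 profile from this log; the secret name dockerhub-creds and the credential placeholders are illustrative only:

	# Side-load the failing image into the node so the pod no longer pulls from Docker Hub.
	docker pull docker.io/nginx:alpine
	minikube -p addons-994396 image load docker.io/nginx:alpine

	# Or authenticate in-cluster pulls with an image pull secret (names are illustrative).
	kubectl --context addons-994396 create secret docker-registry dockerhub-creds \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<token>
	kubectl --context addons-994396 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'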

                                                
                                    
x
+
TestAddons/parallel/LocalPath (303.22s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-994396 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-994396 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-994396 get pvc test-pvc -o jsonpath={.status.phase} -n default
[the identical helpers_test.go:402 poll above was repeated 300 more times over the remainder of the 5m wait]
helpers_test.go:402: (dbg) Non-zero exit: kubectl --context addons-994396 get pvc test-pvc -o jsonpath={.status.phase} -n default: context deadline exceeded (1.731µs)
helpers_test.go:404: TestAddons/parallel/LocalPath: WARNING: PVC get for "default" "test-pvc" returned: context deadline exceeded
addons_test.go:960: failed waiting for PVC test-pvc: context deadline exceeded
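The loop above is a plain jsonpath poll of the PVC phase. As a rough equivalent outside the test harness (a sketch, assuming a kubectl recent enough to support --for=jsonpath), the same wait can be expressed as a single command:

	kubectl --context addons-994396 wait pvc/test-pvc -n default \
	  --for=jsonpath='{.status.phase}'=Bound --timeout=5m

Here the test's 5m deadline expired first, so an equivalent wait would likewise exit non-zero with a timeout.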
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/LocalPath]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-994396 -n addons-994396
helpers_test.go:252: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-994396 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-994396 logs -n 25: (1.498288775s)
helpers_test.go:260: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-147882 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-147882 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:44 UTC │
	│ delete  │ -p download-only-147882                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-147882 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:44 UTC │
	│ start   │ -o=json --download-only -p download-only-664461 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-664461 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:44 UTC │
	│ delete  │ -p download-only-664461                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-664461 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:44 UTC │
	│ delete  │ -p download-only-147882                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-147882 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:44 UTC │
	│ delete  │ -p download-only-664461                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-664461 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:44 UTC │
	│ start   │ --download-only -p binary-mirror-775538 --alsologtostderr --binary-mirror http://127.0.0.1:36997 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-775538 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │                     │
	│ delete  │ -p binary-mirror-775538                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-775538 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:44 UTC │
	│ addons  │ enable dashboard -p addons-994396                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │                     │
	│ addons  │ disable dashboard -p addons-994396                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │                     │
	│ start   │ -p addons-994396 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:51 UTC │
	│ addons  │ addons-994396 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:51 UTC │ 01 Nov 25 08:51 UTC │
	│ addons  │ addons-994396 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:52 UTC │ 01 Nov 25 08:52 UTC │
	│ addons  │ enable headlamp -p addons-994396 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:52 UTC │ 01 Nov 25 08:52 UTC │
	│ addons  │ addons-994396 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:52 UTC │ 01 Nov 25 08:52 UTC │
	│ addons  │ addons-994396 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:52 UTC │ 01 Nov 25 08:52 UTC │
	│ addons  │ addons-994396 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:52 UTC │ 01 Nov 25 08:52 UTC │
	│ addons  │ addons-994396 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:52 UTC │ 01 Nov 25 08:52 UTC │
	│ addons  │ addons-994396 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:54 UTC │ 01 Nov 25 08:56 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 08:44:38
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 08:44:38.415244  535088 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:44:38.415511  535088 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:44:38.415520  535088 out.go:374] Setting ErrFile to fd 2...
	I1101 08:44:38.415525  535088 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:44:38.415722  535088 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-530629/.minikube/bin
	I1101 08:44:38.416292  535088 out.go:368] Setting JSON to false
	I1101 08:44:38.417206  535088 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":62800,"bootTime":1761923878,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 08:44:38.417275  535088 start.go:143] virtualization: kvm guest
	I1101 08:44:38.419180  535088 out.go:179] * [addons-994396] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 08:44:38.420576  535088 notify.go:221] Checking for updates...
	I1101 08:44:38.420602  535088 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 08:44:38.422388  535088 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 08:44:38.423762  535088 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-530629/kubeconfig
	I1101 08:44:38.425054  535088 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-530629/.minikube
	I1101 08:44:38.426433  535088 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 08:44:38.427613  535088 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 08:44:38.429086  535088 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 08:44:38.459669  535088 out.go:179] * Using the kvm2 driver based on user configuration
	I1101 08:44:38.460716  535088 start.go:309] selected driver: kvm2
	I1101 08:44:38.460736  535088 start.go:930] validating driver "kvm2" against <nil>
	I1101 08:44:38.460750  535088 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 08:44:38.461509  535088 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 08:44:38.461750  535088 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 08:44:38.461788  535088 cni.go:84] Creating CNI manager for ""
	I1101 08:44:38.461839  535088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 08:44:38.461847  535088 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1101 08:44:38.461887  535088 start.go:353] cluster config:
	{Name:addons-994396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-994396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 08:44:38.462012  535088 iso.go:125] acquiring lock: {Name:mk4a0ae0d13e232f8e381ad8e5059e42b27a0733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 08:44:38.463350  535088 out.go:179] * Starting "addons-994396" primary control-plane node in "addons-994396" cluster
	I1101 08:44:38.464523  535088 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 08:44:38.464559  535088 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-530629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 08:44:38.464570  535088 cache.go:59] Caching tarball of preloaded images
	I1101 08:44:38.464648  535088 preload.go:233] Found /home/jenkins/minikube-integration/21833-530629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 08:44:38.464659  535088 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 08:44:38.464982  535088 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/config.json ...
	I1101 08:44:38.465015  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/config.json: {Name:mk89a75531523cc17e10cf65ac144e466baef6b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:44:38.465175  535088 start.go:360] acquireMachinesLock for addons-994396: {Name:mk0f0dee5270210132f861d1e08706cfde31b35b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 08:44:38.465227  535088 start.go:364] duration metric: took 38.791µs to acquireMachinesLock for "addons-994396"
	I1101 08:44:38.465244  535088 start.go:93] Provisioning new machine with config: &{Name:addons-994396 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-994396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 08:44:38.465309  535088 start.go:125] createHost starting for "" (driver="kvm2")
	I1101 08:44:38.467651  535088 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1101 08:44:38.467824  535088 start.go:159] libmachine.API.Create for "addons-994396" (driver="kvm2")
	I1101 08:44:38.467852  535088 client.go:173] LocalClient.Create starting
	I1101 08:44:38.467960  535088 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem
	I1101 08:44:38.525135  535088 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem
	I1101 08:44:38.966403  535088 main.go:143] libmachine: creating domain...
	I1101 08:44:38.966427  535088 main.go:143] libmachine: creating network...
	I1101 08:44:38.968049  535088 main.go:143] libmachine: found existing default network
	I1101 08:44:38.968268  535088 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 08:44:38.968754  535088 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b9b7d0}
	I1101 08:44:38.968919  535088 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-994396</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 08:44:38.974811  535088 main.go:143] libmachine: creating private network mk-addons-994396 192.168.39.0/24...
	I1101 08:44:39.051181  535088 main.go:143] libmachine: private network mk-addons-994396 192.168.39.0/24 created
	I1101 08:44:39.051459  535088 main.go:143] libmachine: <network>
	  <name>mk-addons-994396</name>
	  <uuid>960ab3a9-e2ba-413f-8b77-ff4745b036d0</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:3e:a3:01'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 08:44:39.051486  535088 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396 ...
	I1101 08:44:39.051511  535088 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21833-530629/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso
	I1101 08:44:39.051536  535088 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21833-530629/.minikube
	I1101 08:44:39.051601  535088 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21833-530629/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21833-530629/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso...
	I1101 08:44:39.334278  535088 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa...
	I1101 08:44:39.562590  535088 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/addons-994396.rawdisk...
	I1101 08:44:39.562642  535088 main.go:143] libmachine: Writing magic tar header
	I1101 08:44:39.562674  535088 main.go:143] libmachine: Writing SSH key tar header
	I1101 08:44:39.562773  535088 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396 ...
	I1101 08:44:39.562837  535088 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396
	I1101 08:44:39.562920  535088 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396 (perms=drwx------)
	I1101 08:44:39.562944  535088 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21833-530629/.minikube/machines
	I1101 08:44:39.562958  535088 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21833-530629/.minikube/machines (perms=drwxr-xr-x)
	I1101 08:44:39.562977  535088 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21833-530629/.minikube
	I1101 08:44:39.562988  535088 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21833-530629/.minikube (perms=drwxr-xr-x)
	I1101 08:44:39.562999  535088 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21833-530629
	I1101 08:44:39.563010  535088 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21833-530629 (perms=drwxrwxr-x)
	I1101 08:44:39.563022  535088 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1101 08:44:39.563032  535088 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1101 08:44:39.563043  535088 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1101 08:44:39.563053  535088 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1101 08:44:39.563063  535088 main.go:143] libmachine: checking permissions on dir: /home
	I1101 08:44:39.563072  535088 main.go:143] libmachine: skipping /home - not owner
	I1101 08:44:39.563079  535088 main.go:143] libmachine: defining domain...
	I1101 08:44:39.564528  535088 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-994396</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/addons-994396.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-994396'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1101 08:44:39.569846  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:73:0a:92 in network default
	I1101 08:44:39.570479  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:39.570497  535088 main.go:143] libmachine: starting domain...
	I1101 08:44:39.570501  535088 main.go:143] libmachine: ensuring networks are active...
	I1101 08:44:39.571361  535088 main.go:143] libmachine: Ensuring network default is active
	I1101 08:44:39.571760  535088 main.go:143] libmachine: Ensuring network mk-addons-994396 is active
	I1101 08:44:39.572463  535088 main.go:143] libmachine: getting domain XML...
	I1101 08:44:39.574016  535088 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-994396</name>
	  <uuid>47158355-a959-4cbf-84ea-23a10000597a</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/addons-994396.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:2a:d2:e3'/>
	      <source network='mk-addons-994396'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:73:0a:92'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1101 08:44:40.850976  535088 main.go:143] libmachine: waiting for domain to start...
	I1101 08:44:40.852401  535088 main.go:143] libmachine: domain is now running
	I1101 08:44:40.852417  535088 main.go:143] libmachine: waiting for IP...
	I1101 08:44:40.853195  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:40.853985  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:40.853994  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:40.854261  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:40.854309  535088 retry.go:31] will retry after 216.262446ms: waiting for domain to come up
	I1101 08:44:41.071837  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:41.072843  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:41.072862  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:41.073274  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:41.073319  535088 retry.go:31] will retry after 360.302211ms: waiting for domain to come up
	I1101 08:44:41.434879  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:41.435804  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:41.435822  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:41.436172  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:41.436214  535088 retry.go:31] will retry after 371.777554ms: waiting for domain to come up
	I1101 08:44:41.809947  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:41.810703  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:41.810722  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:41.811072  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:41.811112  535088 retry.go:31] will retry after 462.843758ms: waiting for domain to come up
	I1101 08:44:42.275984  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:42.276618  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:42.276637  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:42.276993  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:42.277037  535088 retry.go:31] will retry after 560.265466ms: waiting for domain to come up
	I1101 08:44:42.838931  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:42.839781  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:42.839798  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:42.840224  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:42.840268  535088 retry.go:31] will retry after 839.411139ms: waiting for domain to come up
	I1101 08:44:43.681040  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:43.681790  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:43.681802  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:43.682192  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:43.682243  535088 retry.go:31] will retry after 1.099878288s: waiting for domain to come up
	I1101 08:44:44.783686  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:44.784502  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:44.784521  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:44.784840  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:44.784888  535088 retry.go:31] will retry after 1.052374717s: waiting for domain to come up
	I1101 08:44:45.839257  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:45.839889  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:45.839926  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:45.840243  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:45.840284  535088 retry.go:31] will retry after 1.704542625s: waiting for domain to come up
	I1101 08:44:47.547411  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:47.548205  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:47.548225  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:47.548588  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:47.548630  535088 retry.go:31] will retry after 1.752267255s: waiting for domain to come up
	I1101 08:44:49.302359  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:49.303199  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:49.303210  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:49.303522  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:49.303559  535088 retry.go:31] will retry after 2.861627149s: waiting for domain to come up
	I1101 08:44:52.168696  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:52.169368  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:52.169385  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:52.169681  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:52.169738  535088 retry.go:31] will retry after 2.277819072s: waiting for domain to come up
	I1101 08:44:54.449193  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:54.449957  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:54.449978  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:54.450273  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:54.450316  535088 retry.go:31] will retry after 3.87405165s: waiting for domain to come up
	I1101 08:44:58.329388  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.330073  535088 main.go:143] libmachine: domain addons-994396 has current primary IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.330089  535088 main.go:143] libmachine: found domain IP: 192.168.39.195
	I1101 08:44:58.330096  535088 main.go:143] libmachine: reserving static IP address...
	I1101 08:44:58.330490  535088 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-994396", mac: "52:54:00:2a:d2:e3", ip: "192.168.39.195"} in network mk-addons-994396
	I1101 08:44:58.532247  535088 main.go:143] libmachine: reserved static IP address 192.168.39.195 for domain addons-994396
	I1101 08:44:58.532270  535088 main.go:143] libmachine: waiting for SSH...
	I1101 08:44:58.532276  535088 main.go:143] libmachine: Getting to WaitForSSH function...
	I1101 08:44:58.535646  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.536214  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:58.536242  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.536445  535088 main.go:143] libmachine: Using SSH client type: native
	I1101 08:44:58.536737  535088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1101 08:44:58.536748  535088 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1101 08:44:58.655800  535088 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 08:44:58.656194  535088 main.go:143] libmachine: domain creation complete
	I1101 08:44:58.657668  535088 machine.go:94] provisionDockerMachine start ...
	I1101 08:44:58.660444  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.660857  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:58.660881  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.661078  535088 main.go:143] libmachine: Using SSH client type: native
	I1101 08:44:58.661273  535088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1101 08:44:58.661283  535088 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 08:44:58.781217  535088 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1101 08:44:58.781253  535088 buildroot.go:166] provisioning hostname "addons-994396"
	I1101 08:44:58.784387  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.784787  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:58.784821  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.784992  535088 main.go:143] libmachine: Using SSH client type: native
	I1101 08:44:58.785186  535088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1101 08:44:58.785198  535088 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-994396 && echo "addons-994396" | sudo tee /etc/hostname
	I1101 08:44:58.921865  535088 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-994396
	
	I1101 08:44:58.924651  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.925106  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:58.925158  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.925363  535088 main.go:143] libmachine: Using SSH client type: native
	I1101 08:44:58.925623  535088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1101 08:44:58.925647  535088 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-994396' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-994396/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-994396' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 08:44:59.053021  535088 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 08:44:59.053062  535088 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21833-530629/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-530629/.minikube}
	I1101 08:44:59.053121  535088 buildroot.go:174] setting up certificates
	I1101 08:44:59.053134  535088 provision.go:84] configureAuth start
	I1101 08:44:59.056039  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.056491  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.056527  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.059390  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.059768  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.059793  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.059971  535088 provision.go:143] copyHostCerts
	I1101 08:44:59.060039  535088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-530629/.minikube/key.pem (1675 bytes)
	I1101 08:44:59.060157  535088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-530629/.minikube/ca.pem (1078 bytes)
	I1101 08:44:59.060215  535088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-530629/.minikube/cert.pem (1123 bytes)
	I1101 08:44:59.060262  535088 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-530629/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca-key.pem org=jenkins.addons-994396 san=[127.0.0.1 192.168.39.195 addons-994396 localhost minikube]
	I1101 08:44:59.098818  535088 provision.go:177] copyRemoteCerts
	I1101 08:44:59.098909  535088 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 08:44:59.101492  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.101853  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.101876  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.102044  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:44:59.192919  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 08:44:59.224374  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 08:44:59.254587  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 08:44:59.285112  535088 provision.go:87] duration metric: took 231.963697ms to configureAuth
	I1101 08:44:59.285151  535088 buildroot.go:189] setting minikube options for container-runtime
	I1101 08:44:59.285333  535088 config.go:182] Loaded profile config "addons-994396": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:44:59.288033  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.288440  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.288461  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.288660  535088 main.go:143] libmachine: Using SSH client type: native
	I1101 08:44:59.288854  535088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1101 08:44:59.288872  535088 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 08:44:59.552498  535088 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 08:44:59.552535  535088 machine.go:97] duration metric: took 894.848438ms to provisionDockerMachine
	I1101 08:44:59.552551  535088 client.go:176] duration metric: took 21.084691653s to LocalClient.Create
	I1101 08:44:59.552575  535088 start.go:167] duration metric: took 21.084749844s to libmachine.API.Create "addons-994396"
	I1101 08:44:59.552585  535088 start.go:293] postStartSetup for "addons-994396" (driver="kvm2")
	I1101 08:44:59.552598  535088 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 08:44:59.552698  535088 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 08:44:59.555985  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.556410  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.556446  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.556594  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:44:59.646378  535088 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 08:44:59.651827  535088 info.go:137] Remote host: Buildroot 2025.02
	I1101 08:44:59.651860  535088 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-530629/.minikube/addons for local assets ...
	I1101 08:44:59.652002  535088 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-530629/.minikube/files for local assets ...
	I1101 08:44:59.652045  535088 start.go:296] duration metric: took 99.451778ms for postStartSetup
	I1101 08:44:59.655428  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.655951  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.655983  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.656303  535088 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/config.json ...
	I1101 08:44:59.656524  535088 start.go:128] duration metric: took 21.191204758s to createHost
	I1101 08:44:59.659225  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.659662  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.659688  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.659918  535088 main.go:143] libmachine: Using SSH client type: native
	I1101 08:44:59.660165  535088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1101 08:44:59.660179  535088 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1101 08:44:59.778959  535088 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761986699.744832808
	
	I1101 08:44:59.778992  535088 fix.go:216] guest clock: 1761986699.744832808
	I1101 08:44:59.779003  535088 fix.go:229] Guest: 2025-11-01 08:44:59.744832808 +0000 UTC Remote: 2025-11-01 08:44:59.656538269 +0000 UTC m=+21.291332648 (delta=88.294539ms)
	I1101 08:44:59.779025  535088 fix.go:200] guest clock delta is within tolerance: 88.294539ms
	I1101 08:44:59.779033  535088 start.go:83] releasing machines lock for "addons-994396", held for 21.31379566s
	I1101 08:44:59.782561  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.783052  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.783085  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.783744  535088 ssh_runner.go:195] Run: cat /version.json
	I1101 08:44:59.783923  535088 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 08:44:59.786949  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.787338  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.787364  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.787467  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.787547  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:44:59.788054  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.788100  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.788306  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:44:59.898855  535088 ssh_runner.go:195] Run: systemctl --version
	I1101 08:44:59.905749  535088 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 08:45:00.064091  535088 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 08:45:00.072201  535088 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 08:45:00.072263  535088 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 08:45:00.092562  535088 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 08:45:00.092584  535088 start.go:496] detecting cgroup driver to use...
	I1101 08:45:00.092661  535088 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 08:45:00.112010  535088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 08:45:00.129164  535088 docker.go:218] disabling cri-docker service (if available) ...
	I1101 08:45:00.129222  535088 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 08:45:00.147169  535088 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 08:45:00.164876  535088 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 08:45:00.317011  535088 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 08:45:00.521291  535088 docker.go:234] disabling docker service ...
	I1101 08:45:00.521377  535088 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 08:45:00.537927  535088 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 08:45:00.552544  535088 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 08:45:00.714401  535088 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 08:45:00.855387  535088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 08:45:00.871802  535088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 08:45:00.895848  535088 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 08:45:00.895969  535088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:45:00.908735  535088 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 08:45:00.908831  535088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:45:00.924244  535088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:45:00.938467  535088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:45:00.951396  535088 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 08:45:00.965054  535088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:45:00.977595  535088 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:45:00.998868  535088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:45:01.011547  535088 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 08:45:01.022709  535088 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 08:45:01.022775  535088 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 08:45:01.044963  535088 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 08:45:01.057499  535088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 08:45:01.203336  535088 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 08:45:01.311792  535088 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 08:45:01.311884  535088 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 08:45:01.317453  535088 start.go:564] Will wait 60s for crictl version
	I1101 08:45:01.317538  535088 ssh_runner.go:195] Run: which crictl
	I1101 08:45:01.321986  535088 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 08:45:01.367266  535088 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1101 08:45:01.367363  535088 ssh_runner.go:195] Run: crio --version
	I1101 08:45:01.398127  535088 ssh_runner.go:195] Run: crio --version
	I1101 08:45:01.431424  535088 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1101 08:45:01.435939  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:01.436441  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:01.436471  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:01.436732  535088 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1101 08:45:01.441662  535088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 08:45:01.457635  535088 kubeadm.go:884] updating cluster {Name:addons-994396 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-994396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 08:45:01.457753  535088 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 08:45:01.457802  535088 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 08:45:01.495090  535088 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1101 08:45:01.495193  535088 ssh_runner.go:195] Run: which lz4
	I1101 08:45:01.500348  535088 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 08:45:01.506036  535088 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 08:45:01.506082  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1101 08:45:03.083875  535088 crio.go:462] duration metric: took 1.583585669s to copy over tarball
	I1101 08:45:03.084036  535088 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 08:45:04.665932  535088 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.581842537s)
	I1101 08:45:04.665965  535088 crio.go:469] duration metric: took 1.582007439s to extract the tarball
	I1101 08:45:04.665976  535088 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 08:45:04.707682  535088 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 08:45:04.751036  535088 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 08:45:04.751073  535088 cache_images.go:86] Images are preloaded, skipping loading
	I1101 08:45:04.751085  535088 kubeadm.go:935] updating node { 192.168.39.195 8443 v1.34.1 crio true true} ...
	I1101 08:45:04.751212  535088 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-994396 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.195
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-994396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 08:45:04.751302  535088 ssh_runner.go:195] Run: crio config
	I1101 08:45:04.801702  535088 cni.go:84] Creating CNI manager for ""
	I1101 08:45:04.801733  535088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 08:45:04.801758  535088 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 08:45:04.801791  535088 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.195 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-994396 NodeName:addons-994396 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.195"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.195 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 08:45:04.801978  535088 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.195
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-994396"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.195"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.195"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 08:45:04.802066  535088 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 08:45:04.814571  535088 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 08:45:04.814653  535088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 08:45:04.826605  535088 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1101 08:45:04.846937  535088 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 08:45:04.868213  535088 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1101 08:45:04.888962  535088 ssh_runner.go:195] Run: grep 192.168.39.195	control-plane.minikube.internal$ /etc/hosts
	I1101 08:45:04.893299  535088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.195	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 08:45:04.908547  535088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 08:45:05.049704  535088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 08:45:05.081089  535088 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396 for IP: 192.168.39.195
	I1101 08:45:05.081124  535088 certs.go:195] generating shared ca certs ...
	I1101 08:45:05.081146  535088 certs.go:227] acquiring lock for ca certs: {Name:mkfa41f6ee02a6d4adbbbd414d6f4b29bf47b076 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.081312  535088 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-530629/.minikube/ca.key
	I1101 08:45:05.135626  535088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt ...
	I1101 08:45:05.135669  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt: {Name:mk42d9a91568201fc7bb838317bb109a9d557e4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.135920  535088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-530629/.minikube/ca.key ...
	I1101 08:45:05.135935  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/ca.key: {Name:mk8868035ca874da4b6bcd8361c76f97522a09dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.136031  535088 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.key
	I1101 08:45:05.223112  535088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.crt ...
	I1101 08:45:05.223159  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.crt: {Name:mk17c24c1e5b8188202459729e4a5c2f9a4008a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.223343  535088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.key ...
	I1101 08:45:05.223356  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.key: {Name:mk64bb220f00b339bafb0b18442258c31c6af7ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.223432  535088 certs.go:257] generating profile certs ...
	I1101 08:45:05.223509  535088 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.key
	I1101 08:45:05.223524  535088 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt with IP's: []
	I1101 08:45:05.791770  535088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt ...
	I1101 08:45:05.791805  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: {Name:mk739df015c10897beee55b57aac6a9687c49aee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.791993  535088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.key ...
	I1101 08:45:05.792008  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.key: {Name:mk22e303787fbf3b8945b47ac917db338129138f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.792086  535088 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.key.2a971b58
	I1101 08:45:05.792105  535088 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.crt.2a971b58 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.195]
	I1101 08:45:05.964688  535088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.crt.2a971b58 ...
	I1101 08:45:05.964721  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.crt.2a971b58: {Name:mkc85c65639cbe37cb2f18c20238504fe651c568 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.964892  535088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.key.2a971b58 ...
	I1101 08:45:05.964917  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.key.2a971b58: {Name:mk0a07f1288d6c9ced8ef2d4bb53cbfce6f3c734 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.964998  535088 certs.go:382] copying /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.crt.2a971b58 -> /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.crt
	I1101 08:45:05.965075  535088 certs.go:386] copying /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.key.2a971b58 -> /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.key
	I1101 08:45:05.965124  535088 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.key
	I1101 08:45:05.965142  535088 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.crt with IP's: []
	I1101 08:45:06.097161  535088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.crt ...
	I1101 08:45:06.097197  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.crt: {Name:mke456d45c85355b327c605777e7e939bd178f8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:06.097374  535088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.key ...
	I1101 08:45:06.097388  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.key: {Name:mk96b8f9598bf40057b4d6b2c6e97a30a363b3bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:06.097558  535088 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 08:45:06.097602  535088 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem (1078 bytes)
	I1101 08:45:06.097627  535088 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem (1123 bytes)
	I1101 08:45:06.097651  535088 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/key.pem (1675 bytes)
	I1101 08:45:06.098363  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 08:45:06.130486  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 08:45:06.160429  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 08:45:06.189962  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 08:45:06.219452  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 08:45:06.250552  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 08:45:06.282860  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 08:45:06.313986  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 08:45:06.344383  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 08:45:06.376611  535088 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 08:45:06.399751  535088 ssh_runner.go:195] Run: openssl version
	I1101 08:45:06.406933  535088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 08:45:06.421716  535088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 08:45:06.427410  535088 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 08:45:06.427478  535088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 08:45:06.435363  535088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 08:45:06.449854  535088 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 08:45:06.455299  535088 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 08:45:06.455368  535088 kubeadm.go:401] StartCluster: {Name:addons-994396 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-994396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 08:45:06.455464  535088 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:45:06.455528  535088 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:45:06.499318  535088 cri.go:89] found id: ""
	I1101 08:45:06.499395  535088 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 08:45:06.513696  535088 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 08:45:06.527370  535088 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 08:45:06.541099  535088 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 08:45:06.541122  535088 kubeadm.go:158] found existing configuration files:
	
	I1101 08:45:06.541170  535088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 08:45:06.553610  535088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 08:45:06.553677  535088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 08:45:06.567384  535088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 08:45:06.580377  535088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 08:45:06.580444  535088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 08:45:06.593440  535088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 08:45:06.605393  535088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 08:45:06.605460  535088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 08:45:06.618978  535088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 08:45:06.631411  535088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 08:45:06.631487  535088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 08:45:06.645452  535088 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1101 08:45:06.719122  535088 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 08:45:06.719190  535088 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 08:45:06.829004  535088 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 08:45:06.829160  535088 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 08:45:06.829291  535088 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 08:45:06.841691  535088 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 08:45:06.866137  535088 out.go:252]   - Generating certificates and keys ...
	I1101 08:45:06.866269  535088 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 08:45:06.866364  535088 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 08:45:07.164883  535088 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 08:45:07.767615  535088 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 08:45:08.072088  535088 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 08:45:08.514870  535088 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 08:45:08.646331  535088 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 08:45:08.646504  535088 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-994396 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	I1101 08:45:08.781122  535088 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 08:45:08.781335  535088 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-994396 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	I1101 08:45:08.899420  535088 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 08:45:09.007181  535088 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 08:45:09.224150  535088 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 08:45:09.224224  535088 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 08:45:09.511033  535088 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 08:45:09.752693  535088 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 08:45:09.819463  535088 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 08:45:10.005082  535088 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 08:45:10.463552  535088 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 08:45:10.464025  535088 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 08:45:10.466454  535088 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 08:45:10.471575  535088 out.go:252]   - Booting up control plane ...
	I1101 08:45:10.471714  535088 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 08:45:10.471809  535088 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 08:45:10.471913  535088 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 08:45:10.490781  535088 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 08:45:10.491002  535088 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 08:45:10.498306  535088 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 08:45:10.498812  535088 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 08:45:10.498893  535088 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 08:45:10.686796  535088 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 08:45:10.686991  535088 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 08:45:11.697343  535088 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.005207328s
	I1101 08:45:11.699752  535088 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 08:45:11.699949  535088 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.195:8443/livez
	I1101 08:45:11.700150  535088 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 08:45:11.704134  535088 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 08:45:13.981077  535088 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.280860487s
	I1101 08:45:15.371368  535088 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.67283221s
	I1101 08:45:17.198417  535088 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501722237s
	I1101 08:45:17.211581  535088 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 08:45:17.231075  535088 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 08:45:17.253882  535088 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 08:45:17.254137  535088 kubeadm.go:319] [mark-control-plane] Marking the node addons-994396 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 08:45:17.268868  535088 kubeadm.go:319] [bootstrap-token] Using token: f9fr0l.j77e5jevkskl9xb5
	I1101 08:45:17.270121  535088 out.go:252]   - Configuring RBAC rules ...
	I1101 08:45:17.270326  535088 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 08:45:17.277792  535088 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 08:45:17.293695  535088 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 08:45:17.296955  535088 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 08:45:17.300284  535088 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 08:45:17.303890  535088 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 08:45:17.605222  535088 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 08:45:18.065761  535088 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 08:45:18.604676  535088 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 08:45:18.605674  535088 kubeadm.go:319] 
	I1101 08:45:18.605802  535088 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 08:45:18.605830  535088 kubeadm.go:319] 
	I1101 08:45:18.605992  535088 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 08:45:18.606023  535088 kubeadm.go:319] 
	I1101 08:45:18.606068  535088 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 08:45:18.606156  535088 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 08:45:18.606234  535088 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 08:45:18.606243  535088 kubeadm.go:319] 
	I1101 08:45:18.606321  535088 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 08:45:18.606330  535088 kubeadm.go:319] 
	I1101 08:45:18.606402  535088 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 08:45:18.606415  535088 kubeadm.go:319] 
	I1101 08:45:18.606489  535088 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 08:45:18.606605  535088 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 08:45:18.606702  535088 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 08:45:18.606712  535088 kubeadm.go:319] 
	I1101 08:45:18.606815  535088 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 08:45:18.606947  535088 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 08:45:18.606965  535088 kubeadm.go:319] 
	I1101 08:45:18.607067  535088 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token f9fr0l.j77e5jevkskl9xb5 \
	I1101 08:45:18.607196  535088 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:56aa18b20985495d814b65ba7a2f910118620c74c98b944601f44598a9c0be1d \
	I1101 08:45:18.607233  535088 kubeadm.go:319] 	--control-plane 
	I1101 08:45:18.607244  535088 kubeadm.go:319] 
	I1101 08:45:18.607366  535088 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 08:45:18.607389  535088 kubeadm.go:319] 
	I1101 08:45:18.607497  535088 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token f9fr0l.j77e5jevkskl9xb5 \
	I1101 08:45:18.607642  535088 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:56aa18b20985495d814b65ba7a2f910118620c74c98b944601f44598a9c0be1d 
	I1101 08:45:18.609590  535088 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 08:45:18.609615  535088 cni.go:84] Creating CNI manager for ""
	I1101 08:45:18.609625  535088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 08:45:18.611467  535088 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 08:45:18.612559  535088 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 08:45:18.629659  535088 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1101 08:45:18.653188  535088 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 08:45:18.653266  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:18.653283  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-994396 minikube.k8s.io/updated_at=2025_11_01T08_45_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7 minikube.k8s.io/name=addons-994396 minikube.k8s.io/primary=true
	I1101 08:45:18.823964  535088 ops.go:34] apiserver oom_adj: -16
	I1101 08:45:18.824003  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:19.324429  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:19.824169  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:20.324357  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:20.825065  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:21.324643  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:21.824929  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:22.325055  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:22.824179  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:23.324346  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:23.422037  535088 kubeadm.go:1114] duration metric: took 4.768840437s to wait for elevateKubeSystemPrivileges
	I1101 08:45:23.422092  535088 kubeadm.go:403] duration metric: took 16.966730014s to StartCluster
	I1101 08:45:23.422117  535088 settings.go:142] acquiring lock: {Name:mke0bea80b55c21af3a3a0f83862cfe6da014dd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:23.422289  535088 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-530629/kubeconfig
	I1101 08:45:23.422848  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/kubeconfig: {Name:mk1f1e6312f33030082fd627c6f74ca7eee16587 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:23.423145  535088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 08:45:23.423170  535088 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 08:45:23.423239  535088 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1101 08:45:23.423378  535088 addons.go:70] Setting yakd=true in profile "addons-994396"
	I1101 08:45:23.423402  535088 addons.go:239] Setting addon yakd=true in "addons-994396"
	I1101 08:45:23.423420  535088 addons.go:70] Setting inspektor-gadget=true in profile "addons-994396"
	I1101 08:45:23.423440  535088 config.go:182] Loaded profile config "addons-994396": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:45:23.423457  535088 addons.go:239] Setting addon inspektor-gadget=true in "addons-994396"
	I1101 08:45:23.423459  535088 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-994396"
	I1101 08:45:23.423473  535088 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-994396"
	I1101 08:45:23.423435  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.423491  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.423507  535088 addons.go:70] Setting registry=true in profile "addons-994396"
	I1101 08:45:23.423518  535088 addons.go:239] Setting addon registry=true in "addons-994396"
	I1101 08:45:23.423522  535088 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-994396"
	I1101 08:45:23.423539  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.423555  535088 addons.go:70] Setting cloud-spanner=true in profile "addons-994396"
	I1101 08:45:23.423568  535088 addons.go:239] Setting addon cloud-spanner=true in "addons-994396"
	I1101 08:45:23.423606  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.423731  535088 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-994396"
	I1101 08:45:23.423760  535088 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-994396"
	I1101 08:45:23.424125  535088 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-994396"
	I1101 08:45:23.424214  535088 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-994396"
	I1101 08:45:23.424248  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.423443  535088 addons.go:70] Setting metrics-server=true in profile "addons-994396"
	I1101 08:45:23.424283  535088 addons.go:239] Setting addon metrics-server=true in "addons-994396"
	I1101 08:45:23.424313  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.423545  535088 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-994396"
	I1101 08:45:23.424411  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.424496  535088 addons.go:70] Setting ingress=true in profile "addons-994396"
	I1101 08:45:23.423498  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.424512  535088 addons.go:239] Setting addon ingress=true in "addons-994396"
	I1101 08:45:23.424544  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.425045  535088 addons.go:70] Setting registry-creds=true in profile "addons-994396"
	I1101 08:45:23.425074  535088 addons.go:239] Setting addon registry-creds=true in "addons-994396"
	I1101 08:45:23.425105  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.425174  535088 addons.go:70] Setting volcano=true in profile "addons-994396"
	I1101 08:45:23.425210  535088 addons.go:239] Setting addon volcano=true in "addons-994396"
	I1101 08:45:23.425245  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.423474  535088 addons.go:70] Setting default-storageclass=true in profile "addons-994396"
	I1101 08:45:23.425528  535088 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-994396"
	I1101 08:45:23.425555  535088 addons.go:70] Setting gcp-auth=true in profile "addons-994396"
	I1101 08:45:23.425587  535088 addons.go:70] Setting volumesnapshots=true in profile "addons-994396"
	I1101 08:45:23.425594  535088 mustload.go:66] Loading cluster: addons-994396
	I1101 08:45:23.425605  535088 addons.go:239] Setting addon volumesnapshots=true in "addons-994396"
	I1101 08:45:23.425629  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.425759  535088 config.go:182] Loaded profile config "addons-994396": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:45:23.426001  535088 addons.go:70] Setting storage-provisioner=true in profile "addons-994396"
	I1101 08:45:23.426034  535088 addons.go:239] Setting addon storage-provisioner=true in "addons-994396"
	I1101 08:45:23.426060  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.426263  535088 addons.go:70] Setting ingress-dns=true in profile "addons-994396"
	I1101 08:45:23.426312  535088 addons.go:239] Setting addon ingress-dns=true in "addons-994396"
	I1101 08:45:23.426349  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.428071  535088 out.go:179] * Verifying Kubernetes components...
	I1101 08:45:23.430376  535088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 08:45:23.432110  535088 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1101 08:45:23.432211  535088 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1101 08:45:23.432239  535088 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1101 08:45:23.432548  535088 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-994396"
	I1101 08:45:23.433347  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.433599  535088 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1101 08:45:23.433622  535088 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1101 08:45:23.434372  535088 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1101 08:45:23.434372  535088 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1101 08:45:23.434372  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1101 08:45:23.434399  535088 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	W1101 08:45:23.434936  535088 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1101 08:45:23.434947  535088 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1101 08:45:23.434397  535088 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1101 08:45:23.435739  535088 addons.go:239] Setting addon default-storageclass=true in "addons-994396"
	I1101 08:45:23.435133  535088 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 08:45:23.435780  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.435145  535088 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1101 08:45:23.435145  535088 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1101 08:45:23.435569  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.436246  535088 out.go:179]   - Using image docker.io/registry:3.0.0
	I1101 08:45:23.436291  535088 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 08:45:23.437459  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1101 08:45:23.436270  535088 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1101 08:45:23.437541  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1101 08:45:23.437032  535088 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 08:45:23.437636  535088 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 08:45:23.437844  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1101 08:45:23.437918  535088 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 08:45:23.437851  535088 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1101 08:45:23.437941  535088 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 08:45:23.438856  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1101 08:45:23.437976  535088 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 08:45:23.438988  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1101 08:45:23.439032  535088 out.go:179]   - Using image docker.io/busybox:stable
	I1101 08:45:23.439073  535088 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1101 08:45:23.439539  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1101 08:45:23.439090  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1101 08:45:23.439094  535088 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1101 08:45:23.439317  535088 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 08:45:23.439929  535088 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 08:45:23.439932  535088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1101 08:45:23.439957  535088 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1101 08:45:23.439990  535088 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 08:45:23.440001  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 08:45:23.440144  535088 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 08:45:23.440159  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1101 08:45:23.442297  535088 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 08:45:23.442308  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1101 08:45:23.442298  535088 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1101 08:45:23.443272  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.443791  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.443933  535088 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 08:45:23.443957  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1101 08:45:23.444059  535088 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 08:45:23.444083  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1101 08:45:23.444856  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.444941  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.445160  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1101 08:45:23.445705  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.446038  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.446083  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.446929  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.448105  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1101 08:45:23.448713  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.449090  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.450028  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.450296  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.450327  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.450341  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.450369  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.450600  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1101 08:45:23.451017  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.451085  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.451162  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.451241  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.451823  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.451855  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.452155  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.452274  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.452437  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.452519  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.452542  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.452550  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.452567  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.452769  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.453008  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.453181  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.453204  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.453341  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1101 08:45:23.453485  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.453526  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.453547  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.453582  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.453698  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.453748  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.453776  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.453961  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.454247  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.454637  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.454592  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.454668  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.454765  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.454810  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.454640  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.454828  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.454953  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.455189  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.455476  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.455511  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.455565  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.455603  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.455714  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.455949  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1101 08:45:23.456005  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.457369  535088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1101 08:45:23.457390  535088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1101 08:45:23.460387  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.460852  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.460874  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.461072  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	W1101 08:45:23.763758  535088 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57416->192.168.39.195:22: read: connection reset by peer
	I1101 08:45:23.763807  535088 retry.go:31] will retry after 294.020846ms: ssh: handshake failed: read tcp 192.168.39.1:57416->192.168.39.195:22: read: connection reset by peer
	W1101 08:45:23.763891  535088 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57426->192.168.39.195:22: read: connection reset by peer
	I1101 08:45:23.763941  535088 retry.go:31] will retry after 247.932093ms: ssh: handshake failed: read tcp 192.168.39.1:57426->192.168.39.195:22: read: connection reset by peer
	I1101 08:45:23.987612  535088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 08:45:23.987618  535088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 08:45:24.391549  535088 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1101 08:45:24.391592  535088 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1101 08:45:24.396118  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 08:45:24.428988  535088 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1101 08:45:24.429026  535088 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1101 08:45:24.539937  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 08:45:24.542018  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 08:45:24.551067  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1101 08:45:24.578439  535088 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1101 08:45:24.578476  535088 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1101 08:45:24.590870  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 08:45:24.593597  535088 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 08:45:24.593630  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1101 08:45:24.648891  535088 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:24.648945  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1101 08:45:24.654530  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 08:45:24.691639  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 08:45:24.775174  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 08:45:24.894476  535088 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1101 08:45:24.894518  535088 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1101 08:45:25.110719  535088 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1101 08:45:25.110755  535088 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1101 08:45:25.248567  535088 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 08:45:25.248606  535088 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 08:45:25.251834  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 08:45:25.279634  535088 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1101 08:45:25.279661  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1101 08:45:25.282613  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:25.356642  535088 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1101 08:45:25.356672  535088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1101 08:45:25.596573  535088 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1101 08:45:25.596609  535088 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1101 08:45:25.610846  535088 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1101 08:45:25.610885  535088 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1101 08:45:25.674735  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1101 08:45:25.705462  535088 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 08:45:25.705495  535088 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 08:45:25.746878  535088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1101 08:45:25.746929  535088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1101 08:45:25.925617  535088 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1101 08:45:25.925645  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1101 08:45:25.996036  535088 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1101 08:45:25.996070  535088 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1101 08:45:26.051328  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 08:45:26.240447  535088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1101 08:45:26.240483  535088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1101 08:45:26.408185  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1101 08:45:26.436460  535088 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 08:45:26.436501  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1101 08:45:26.557448  535088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1101 08:45:26.557481  535088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1101 08:45:26.856571  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 08:45:27.059648  535088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1101 08:45:27.059683  535088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1101 08:45:27.286113  535088 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.298454996s)
	I1101 08:45:27.286197  535088 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.298476587s)
	I1101 08:45:27.286240  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.890088886s)
	I1101 08:45:27.286229  535088 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
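For context, the sed pipeline above rewrites the kube-system/coredns ConfigMap so that host.minikube.internal resolves to the host-side IP 192.168.39.1. Reconstructed from the command itself, and assuming an otherwise stock kubeadm Corefile (this is not captured cluster state), the relevant portion of the ConfigMap afterwards would look roughly like:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: coredns
      namespace: kube-system
    data:
      Corefile: |
        .:53 {
            log
            errors
            ...
            hosts {
               192.168.39.1 host.minikube.internal
               fallthrough
            }
            forward . /etc/resolv.conf
            ...
        }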
	I1101 08:45:27.286918  535088 node_ready.go:35] waiting up to 6m0s for node "addons-994396" to be "Ready" ...
	I1101 08:45:27.312278  535088 node_ready.go:49] node "addons-994396" is "Ready"
	I1101 08:45:27.312325  535088 node_ready.go:38] duration metric: took 25.37676ms for node "addons-994396" to be "Ready" ...
	I1101 08:45:27.312346  535088 api_server.go:52] waiting for apiserver process to appear ...
	I1101 08:45:27.312422  535088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 08:45:27.686576  535088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1101 08:45:27.686612  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1101 08:45:27.792267  535088 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-994396" context rescaled to 1 replicas
	I1101 08:45:28.140990  535088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1101 08:45:28.141032  535088 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1101 08:45:28.704311  535088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1101 08:45:28.704352  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1101 08:45:29.292401  535088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1101 08:45:29.292429  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1101 08:45:29.854708  535088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 08:45:29.854740  535088 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1101 08:45:30.288568  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 08:45:30.575091  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.033025614s)
	I1101 08:45:30.862016  535088 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1101 08:45:30.865323  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:30.865769  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:30.865797  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:30.866047  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:31.632521  535088 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1101 08:45:31.806924  535088 addons.go:239] Setting addon gcp-auth=true in "addons-994396"
	I1101 08:45:31.807015  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:31.809359  535088 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1101 08:45:31.813090  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:31.814762  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:31.814801  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:31.814989  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:33.008057  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.456928918s)
	I1101 08:45:33.008164  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.417239871s)
	I1101 08:45:33.008205  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.35364594s)
	I1101 08:45:33.008240  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.316568456s)
	I1101 08:45:33.008302  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (8.233079465s)
	I1101 08:45:33.008386  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.756527935s)
	I1101 08:45:33.008524  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.725858558s)
	I1101 08:45:33.008553  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.333786806s)
	W1101 08:45:33.008563  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:33.008566  535088 addons.go:480] Verifying addon registry=true in "addons-994396"
	I1101 08:45:33.008586  535088 retry.go:31] will retry after 241.480923ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
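kubectl rejects /etc/kubernetes/addons/ig-crd.yaml here because the file carries no top-level apiVersion or kind, which is consistent with the 14-byte copy logged at 08:45:23.434947; the same apply is re-run with --force at 08:45:33.250. As an illustration only (the real inspektor-gadget CRD is far larger and uses its own names), the two fields kubectl is complaining about are the standard CRD header:

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: examples.gadget.example.io   # placeholder name, not the addon's actual CRD
    spec:
      ...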
	I1101 08:45:33.008638  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.957281467s)
	I1101 08:45:33.008733  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.600492861s)
	I1101 08:45:33.008738  535088 addons.go:480] Verifying addon metrics-server=true in "addons-994396"
	I1101 08:45:33.010227  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.470250108s)
	I1101 08:45:33.010253  535088 addons.go:480] Verifying addon ingress=true in "addons-994396"
	I1101 08:45:33.011210  535088 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-994396 service yakd-dashboard -n yakd-dashboard
	
	I1101 08:45:33.011218  535088 out.go:179] * Verifying registry addon...
	I1101 08:45:33.012250  535088 out.go:179] * Verifying ingress addon...
	I1101 08:45:33.014024  535088 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1101 08:45:33.015512  535088 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1101 08:45:33.051723  535088 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 08:45:33.051749  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:33.051812  535088 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1101 08:45:33.051833  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 08:45:33.111540  535088 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
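The 'storage-provisioner-rancher' warning above is an optimistic-concurrency conflict: minikube read the local-path StorageClass, the object changed before its update landed, and the stale write was rejected with "the object has been modified", so it has to be retried against the latest version. Marking a class as the cluster default comes down to a single annotation; a minimal sketch of the intended end state, with the provisioner value assumed rather than taken from this run:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: local-path
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: rancher.io/local-path   # assumed provisioner for the local-path addon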
	I1101 08:45:33.250325  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:33.619402  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:33.619673  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:33.847569  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.990948052s)
	I1101 08:45:33.847595  535088 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (6.535150405s)
	I1101 08:45:33.847621  535088 api_server.go:72] duration metric: took 10.424417181s to wait for apiserver process to appear ...
	I1101 08:45:33.847629  535088 api_server.go:88] waiting for apiserver healthz status ...
	W1101 08:45:33.847626  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1101 08:45:33.847652  535088 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I1101 08:45:33.847651  535088 retry.go:31] will retry after 218.125549ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
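This failure is an ordering problem rather than a bad manifest: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same invocation that creates the snapshot.storage.k8s.io CRDs, and the API server has not registered the new kind yet, hence "ensure CRDs are installed first"; the apply is retried with --force at 08:45:34.066. For illustration, a minimal VolumeSnapshotClass of this shape (only the object name is taken from the log; the driver value is an assumption based on the CSI hostpath driver, not read from the file):

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshotClass
    metadata:
      name: csi-hostpath-snapclass
    driver: hostpath.csi.k8s.io   # assumed driver name; not confirmed from this run
    deletionPolicy: Delete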
	I1101 08:45:33.908865  535088 api_server.go:279] https://192.168.39.195:8443/healthz returned 200:
	ok
	I1101 08:45:33.910593  535088 api_server.go:141] control plane version: v1.34.1
	I1101 08:45:33.910629  535088 api_server.go:131] duration metric: took 62.993472ms to wait for apiserver health ...
	I1101 08:45:33.910638  535088 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 08:45:33.979264  535088 system_pods.go:59] 17 kube-system pods found
	I1101 08:45:33.979341  535088 system_pods.go:61] "amd-gpu-device-plugin-vssmp" [a3b8c16e-b583-47df-a5c2-97218d3ec5be] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 08:45:33.979358  535088 system_pods.go:61] "coredns-66bc5c9577-2rqh8" [b131b2b2-f9b9-4197-8bc7-4d1bc185c804] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:45:33.979373  535088 system_pods.go:61] "coredns-66bc5c9577-8b9dw" [7580a21e-bef2-4e34-84b5-b8f67e32b346] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:45:33.979381  535088 system_pods.go:61] "etcd-addons-994396" [9ed2483c-c69f-483c-a489-238983cc8e9e] Running
	I1101 08:45:33.979388  535088 system_pods.go:61] "kube-apiserver-addons-994396" [0d587a06-f48e-4068-bb17-3a28d8a8d340] Running
	I1101 08:45:33.979401  535088 system_pods.go:61] "kube-controller-manager-addons-994396" [e60002dc-411e-458d-b7ea-affbee71d5a0] Running
	I1101 08:45:33.979413  535088 system_pods.go:61] "kube-ingress-dns-minikube" [d947f942-2149-492a-9b4e-1f9c22405815] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:45:33.979421  535088 system_pods.go:61] "kube-proxy-fbmdq" [dc5dd6b4-2f38-4c9d-acd8-92f7984fd96a] Running
	I1101 08:45:33.979431  535088 system_pods.go:61] "kube-scheduler-addons-994396" [bfc13d51-5be5-4462-b4a9-5d4f37f75bc4] Running
	I1101 08:45:33.979438  535088 system_pods.go:61] "metrics-server-85b7d694d7-qpjgn" [ca6b12be-7c02-4334-aa28-6300877d8e89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:45:33.979452  535088 system_pods.go:61] "nvidia-device-plugin-daemonset-bn97p" [8cc13452-31c6-46b5-8efb-e8b44ec63c27] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 08:45:33.979468  535088 system_pods.go:61] "registry-6b586f9694-b4ph6" [f2c8e5be-bee4-4b31-a8dc-ee43d6a6430c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:45:33.979480  535088 system_pods.go:61] "registry-creds-764b6fb674-xstzf" [75cdadc5-e3ea-4aae-9002-6dca21e0f758] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:45:33.979501  535088 system_pods.go:61] "registry-proxy-bzs78" [151e456a-63e0-4527-8511-34c4444fef48] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 08:45:33.979512  535088 system_pods.go:61] "snapshot-controller-7d9fbc56b8-2pbx5" [e9e973a4-20dd-4785-a3d6-1557c012cc76] Pending
	I1101 08:45:33.979522  535088 system_pods.go:61] "snapshot-controller-7d9fbc56b8-jbkmr" [19dc2ae7-668b-4952-9c2d-6602eac4449e] Pending
	I1101 08:45:33.979531  535088 system_pods.go:61] "storage-provisioner" [a0182754-0c9c-458b-a340-20ec025cb56c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 08:45:33.979545  535088 system_pods.go:74] duration metric: took 68.899123ms to wait for pod list to return data ...
	I1101 08:45:33.979563  535088 default_sa.go:34] waiting for default service account to be created ...
	I1101 08:45:34.005592  535088 default_sa.go:45] found service account: "default"
	I1101 08:45:34.005620  535088 default_sa.go:55] duration metric: took 26.049347ms for default service account to be created ...
	I1101 08:45:34.005631  535088 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 08:45:34.029039  535088 system_pods.go:86] 17 kube-system pods found
	I1101 08:45:34.029088  535088 system_pods.go:89] "amd-gpu-device-plugin-vssmp" [a3b8c16e-b583-47df-a5c2-97218d3ec5be] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 08:45:34.029098  535088 system_pods.go:89] "coredns-66bc5c9577-2rqh8" [b131b2b2-f9b9-4197-8bc7-4d1bc185c804] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:45:34.029109  535088 system_pods.go:89] "coredns-66bc5c9577-8b9dw" [7580a21e-bef2-4e34-84b5-b8f67e32b346] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:45:34.029116  535088 system_pods.go:89] "etcd-addons-994396" [9ed2483c-c69f-483c-a489-238983cc8e9e] Running
	I1101 08:45:34.029123  535088 system_pods.go:89] "kube-apiserver-addons-994396" [0d587a06-f48e-4068-bb17-3a28d8a8d340] Running
	I1101 08:45:34.029128  535088 system_pods.go:89] "kube-controller-manager-addons-994396" [e60002dc-411e-458d-b7ea-affbee71d5a0] Running
	I1101 08:45:34.029139  535088 system_pods.go:89] "kube-ingress-dns-minikube" [d947f942-2149-492a-9b4e-1f9c22405815] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:45:34.029144  535088 system_pods.go:89] "kube-proxy-fbmdq" [dc5dd6b4-2f38-4c9d-acd8-92f7984fd96a] Running
	I1101 08:45:34.029150  535088 system_pods.go:89] "kube-scheduler-addons-994396" [bfc13d51-5be5-4462-b4a9-5d4f37f75bc4] Running
	I1101 08:45:34.029156  535088 system_pods.go:89] "metrics-server-85b7d694d7-qpjgn" [ca6b12be-7c02-4334-aa28-6300877d8e89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:45:34.029165  535088 system_pods.go:89] "nvidia-device-plugin-daemonset-bn97p" [8cc13452-31c6-46b5-8efb-e8b44ec63c27] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 08:45:34.029173  535088 system_pods.go:89] "registry-6b586f9694-b4ph6" [f2c8e5be-bee4-4b31-a8dc-ee43d6a6430c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:45:34.029184  535088 system_pods.go:89] "registry-creds-764b6fb674-xstzf" [75cdadc5-e3ea-4aae-9002-6dca21e0f758] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:45:34.029194  535088 system_pods.go:89] "registry-proxy-bzs78" [151e456a-63e0-4527-8511-34c4444fef48] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 08:45:34.029202  535088 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2pbx5" [e9e973a4-20dd-4785-a3d6-1557c012cc76] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:45:34.029211  535088 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jbkmr" [19dc2ae7-668b-4952-9c2d-6602eac4449e] Pending
	I1101 08:45:34.029232  535088 system_pods.go:89] "storage-provisioner" [a0182754-0c9c-458b-a340-20ec025cb56c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 08:45:34.029244  535088 system_pods.go:126] duration metric: took 23.605903ms to wait for k8s-apps to be running ...
	I1101 08:45:34.029259  535088 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 08:45:34.029328  535088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 08:45:34.057589  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:34.060041  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:34.066143  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 08:45:34.536703  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:34.540613  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:35.033279  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:35.057492  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:35.517382  535088 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.707985766s)
	I1101 08:45:35.519009  535088 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 08:45:35.519008  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.230381443s)
	I1101 08:45:35.519151  535088 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-994396"
	I1101 08:45:35.520249  535088 out.go:179] * Verifying csi-hostpath-driver addon...
	I1101 08:45:35.521386  535088 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1101 08:45:35.522322  535088 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1101 08:45:35.523075  535088 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1101 08:45:35.523091  535088 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1101 08:45:35.574185  535088 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 08:45:35.574221  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:35.574179  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:35.589220  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:35.670403  535088 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1101 08:45:35.670443  535088 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1101 08:45:35.926227  535088 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 08:45:35.926260  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1101 08:45:36.028744  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:36.029084  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:36.032411  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:36.103812  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 08:45:36.521069  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:36.523012  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:36.530349  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:37.024569  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:37.026839  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:37.029801  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:37.202891  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.952517264s)
	W1101 08:45:37.202946  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:37.202972  535088 retry.go:31] will retry after 301.106324ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
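	The repeated apply failures above all stem from the same validation error: kubectl rejects /etc/kubernetes/addons/ig-crd.yaml because at least one YAML document in that file does not declare apiVersion and kind, fields every Kubernetes manifest must set before client-side validation (or the API server) will accept it. As a reference sketch only — the group and kind names below are placeholders, not the actual inspektor-gadget CRD shipped by the addon — a minimal CustomResourceDefinition that would pass this validation starts like this:

	    # Illustrative sketch: a minimal CRD with the apiVersion/kind header
	    # that kubectl reports as missing from ig-crd.yaml. Names are placeholders.
	    apiVersion: apiextensions.k8s.io/v1
	    kind: CustomResourceDefinition
	    metadata:
	      name: examples.demo.example.com
	    spec:
	      group: demo.example.com
	      names:
	        kind: Example
	        plural: examples
	        singular: example
	      scope: Namespaced
	      versions:
	        - name: v1
	          served: true
	          storage: true
	          schema:
	            openAPIV3Schema:
	              type: object
	              x-kubernetes-preserve-unknown-fields: true

	Because the error is in the manifest itself rather than in cluster state, the retries that follow hit the identical failure each time until the backoff is exhausted.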
	I1101 08:45:37.203012  535088 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.173650122s)
	I1101 08:45:37.203055  535088 system_svc.go:56] duration metric: took 3.173789622s WaitForService to wait for kubelet
	I1101 08:45:37.203071  535088 kubeadm.go:587] duration metric: took 13.779865062s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 08:45:37.203102  535088 node_conditions.go:102] verifying NodePressure condition ...
	I1101 08:45:37.208388  535088 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 08:45:37.208416  535088 node_conditions.go:123] node cpu capacity is 2
	I1101 08:45:37.208429  535088 node_conditions.go:105] duration metric: took 5.320357ms to run NodePressure ...
	I1101 08:45:37.208441  535088 start.go:242] waiting for startup goroutines ...
	I1101 08:45:37.368099  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.301889566s)
	I1101 08:45:37.504488  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:37.521079  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:37.521246  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:37.528201  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:37.991386  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.887518439s)
	I1101 08:45:37.992795  535088 addons.go:480] Verifying addon gcp-auth=true in "addons-994396"
	I1101 08:45:37.995595  535088 out.go:179] * Verifying gcp-auth addon...
	I1101 08:45:37.997651  535088 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1101 08:45:38.013086  535088 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1101 08:45:38.013118  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:38.028095  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:38.030768  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:38.041146  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:38.502928  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:38.520170  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:38.521930  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:38.526766  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:39.004207  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:39.019028  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:39.024223  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:39.031869  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:39.206009  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.701470957s)
	W1101 08:45:39.206061  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:39.206085  535088 retry.go:31] will retry after 556.568559ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:39.503999  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:39.527340  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:39.537658  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:39.537658  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:39.763081  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:40.006287  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:40.021411  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:40.025825  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:40.028609  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:40.507622  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:40.523293  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:40.527164  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:40.530886  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:41.005619  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:41.021779  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:41.023058  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:41.028879  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:41.134842  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.371696885s)
	W1101 08:45:41.134889  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:41.134933  535088 retry.go:31] will retry after 634.404627ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:41.501998  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:41.519483  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:41.522699  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:41.527571  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:41.769910  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:42.004958  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:42.021144  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:42.021931  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:42.027195  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:42.501545  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:42.519865  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:42.522754  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:42.526903  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:42.775680  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.00572246s)
	W1101 08:45:42.775745  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:42.775781  535088 retry.go:31] will retry after 1.084498807s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:43.002944  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:43.020356  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:43.020475  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:43.134004  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:43.504736  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:43.519636  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:43.520489  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:43.525810  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:43.861263  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:44.001829  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:44.019292  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:44.021251  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:44.026202  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:44.503149  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:44.520624  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:44.520651  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:44.526211  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:45:44.623495  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:44.623540  535088 retry.go:31] will retry after 1.856024944s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:45.001600  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:45.020242  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:45.022140  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:45.026024  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:45.507084  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:45.523761  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:45.524237  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:45.529475  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:46.005033  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:46.108846  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:46.109151  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:46.109369  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:46.479732  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:46.503499  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:46.520286  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:46.526234  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:46.529155  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:47.001657  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:47.019094  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:47.023015  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:47.027997  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:47.507760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:47.519999  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:47.524925  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:47.528391  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:47.666049  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.186267383s)
	W1101 08:45:47.666140  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:47.666174  535088 retry.go:31] will retry after 4.139204607s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:48.003042  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:48.019125  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:48.027235  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:48.031596  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:48.722743  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:48.727291  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:48.727372  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:48.727610  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:49.004382  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:49.019147  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:49.021814  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:49.026878  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:49.504442  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:49.517916  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:49.520088  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:49.525828  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:50.001964  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:50.024108  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:50.024120  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:50.029503  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:50.504014  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:50.523676  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:50.527259  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:50.529569  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:51.002796  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:51.022756  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:51.022985  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:51.026836  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:51.501595  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:51.523272  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:51.526829  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:51.530749  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:51.806085  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:52.003559  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:52.019381  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:52.019451  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:52.027431  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:52.504756  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:52.522177  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:52.526818  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:52.531367  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:53.001310  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:53.018845  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:53.024989  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:53.029380  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:53.104383  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.298241592s)
	W1101 08:45:53.104437  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:53.104469  535088 retry.go:31] will retry after 2.354213604s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:53.504133  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:53.521260  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:53.521459  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:53.530531  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:54.465678  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:54.465798  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:54.466036  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:54.466159  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:54.562016  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:54.562014  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:54.562133  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:54.562454  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:55.001120  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:55.025479  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:55.025582  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:55.026324  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:55.460012  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:55.504349  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:55.519300  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:55.521013  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:55.527541  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:56.002846  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:56.025053  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:56.029411  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:56.032019  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:56.575604  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:56.575734  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:56.577952  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:56.577981  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:56.753301  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.293228646s)
	W1101 08:45:56.753349  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:56.753376  535088 retry.go:31] will retry after 4.355574242s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:57.006174  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:57.021087  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:57.023942  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:57.029154  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:57.505515  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:57.520197  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:57.523156  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:57.525955  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:58.001505  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:58.018201  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:58.022518  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:58.025296  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:58.505701  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:58.524023  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:58.526483  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:58.536508  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:59.001410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:59.017471  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:59.020442  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:59.025457  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:59.501507  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:59.519043  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:59.520094  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:59.525760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:00.001248  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:00.017563  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:00.020984  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:00.026549  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:00.501281  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:00.519844  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:00.521324  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:00.525700  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:01.001953  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:01.020105  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:01.020877  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:01.025885  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:01.110059  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:46:01.502129  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:01.519377  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:01.523178  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:01.526440  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:46:01.845885  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:46:01.845957  535088 retry.go:31] will retry after 7.871379914s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:46:02.001335  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:02.019157  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:02.021487  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:02.026236  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:02.502141  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:02.517119  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:02.519718  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:02.526453  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:03.002138  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:03.017025  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:03.019806  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:03.026770  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:03.502833  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:03.520032  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:03.520118  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:03.526559  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:04.064971  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:04.065055  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:04.068066  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:04.068526  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:04.502308  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:04.520197  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:04.521585  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:04.526046  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:05.003330  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:05.017484  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:05.019495  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:05.026496  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:05.501222  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:05.517839  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:05.520724  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:05.525994  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:06.001368  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:06.019614  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:06.020124  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:06.025568  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:06.500972  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:06.518736  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:06.520211  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:06.526135  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:07.002092  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:07.018836  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:07.020757  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:07.025238  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:07.503063  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:07.517984  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:07.519990  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:07.528565  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:08.002059  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:08.018162  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:08.020563  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:08.026357  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:08.501444  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:08.517337  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:08.519389  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:08.525929  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:09.002578  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:09.018521  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:09.020246  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:09.026866  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:09.501972  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:09.518157  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:09.519720  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:09.527087  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:09.718336  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:46:10.004096  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:10.021038  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:10.021333  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:10.027767  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:46:10.413712  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:46:10.413760  535088 retry.go:31] will retry after 19.114067213s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
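	The apply failure logged above (and the retry it schedules) comes from kubectl's client-side validation rejecting /etc/kubernetes/addons/ig-crd.yaml: at least one YAML document in that file is missing the required apiVersion and kind fields. The snippet below is only an illustrative sketch of a minimal CustomResourceDefinition manifest with both fields present; every name in it (widgets.example.com, Widget, etc.) is a placeholder, not the actual ig-crd.yaml content. As the error text itself notes, validation could also be skipped with --validate=false, but that would mask the malformed manifest rather than fix it.

	# Hypothetical minimal CRD manifest -- placeholder names, not the real ig-crd.yaml
	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
	metadata:
	  # metadata.name must be <plural>.<group>
	  name: widgets.example.com
	spec:
	  group: example.com
	  scope: Namespaced
	  names:
	    plural: widgets
	    singular: widget
	    kind: Widget
	  versions:
	    - name: v1
	      served: true
	      storage: true
	      schema:
	        openAPIV3Schema:
	          type: object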
	I1101 08:46:10.501358  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:10.517730  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:10.520404  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:10.526363  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:11.002849  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:11.019496  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:11.019995  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:11.026025  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:11.501655  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:11.518007  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:11.521219  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:11.525426  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:12.000873  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:12.017867  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:12.020240  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:12.026060  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:12.502263  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:12.518472  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:12.519451  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:12.526084  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:13.002272  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:13.017626  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:13.020404  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:13.025249  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:13.501457  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:13.518992  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:13.520857  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:13.526486  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:14.000572  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:14.019408  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:14.020492  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:14.025038  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:14.501826  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:14.518060  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:14.520198  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:14.526075  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:15.002744  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:15.018115  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:15.019636  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:15.025834  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:15.501625  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:15.518152  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:15.519669  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:15.525079  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:16.001990  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:16.021114  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:16.022918  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:16.025425  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:16.501061  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:16.519200  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:16.519212  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:16.525882  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:17.002326  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:17.017673  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:17.020197  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:17.026945  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:17.502364  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:17.518476  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:17.520804  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:17.526128  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:18.004541  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:18.017957  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:18.020439  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:18.028122  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:18.502479  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:18.519387  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:18.519499  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:18.525828  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:19.003038  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:19.019735  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:19.020844  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:19.027661  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:19.501803  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:19.519280  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:19.519835  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:19.526155  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:20.001793  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:20.018442  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:20.019878  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:20.025324  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:20.501246  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:20.520476  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:20.520774  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:20.525872  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:21.002010  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:21.018221  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:21.019989  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:21.025817  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:21.501814  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:21.518070  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:21.520290  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:21.526096  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:22.002018  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:22.019705  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:22.021053  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:22.026071  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:22.501728  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:22.519405  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:22.520617  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:22.525885  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:23.001744  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:23.019715  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:23.020644  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:23.025597  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:23.502175  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:23.519303  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:23.520222  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:23.526675  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:24.001582  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:24.018997  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:24.020524  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:24.025085  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:24.501770  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:24.519601  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:24.520468  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:24.525222  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:25.002719  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:25.018650  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:25.020825  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:25.026802  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:25.501690  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:25.517716  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:25.520832  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:25.525983  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:26.002212  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:26.017751  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:26.019488  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:26.025775  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:26.501873  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:26.519741  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:26.519825  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:26.526640  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:27.001148  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:27.019101  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:27.019815  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:27.025796  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:27.502066  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:27.518977  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:27.520625  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:27.527501  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:28.000982  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:28.018045  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:28.019539  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:28.026321  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:28.502967  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:28.517882  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:28.520453  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:28.525074  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:29.002093  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:29.019794  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:29.021920  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:29.025114  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:29.502294  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:29.517914  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:29.519213  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:29.526478  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:29.528534  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:46:30.001669  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:30.023801  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:30.027674  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:30.029691  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:46:30.252885  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:46:30.252962  535088 retry.go:31] will retry after 26.857733331s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:46:30.501958  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:30.518713  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:30.519451  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:30.526672  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:31.001425  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:31.019226  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:31.020064  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:31.026340  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:31.501882  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:31.518669  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:31.519450  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:31.526794  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:32.001295  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:32.018253  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:32.020474  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:32.026067  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:32.501521  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:32.520301  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:32.522051  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:32.526250  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:33.003215  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:33.018591  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:33.020188  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:33.026759  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:33.501809  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:33.518399  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:33.520442  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:33.526258  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:34.001781  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:34.019409  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:34.019682  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:34.026569  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:34.501910  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:34.518388  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:34.519877  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:34.526549  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:35.002205  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:35.018104  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:35.019931  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:35.026760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:35.501124  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:35.517626  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:35.519260  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:35.526635  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:36.001556  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:36.017651  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:36.020209  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:36.026600  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:36.501047  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:36.519095  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:36.520391  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:36.526515  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:37.001745  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:37.017677  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:37.019854  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:37.026083  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:37.504677  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:37.518518  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:37.519504  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:37.527753  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:38.001657  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:38.018846  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:38.020360  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:38.026665  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:38.501370  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:38.517442  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:38.519287  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:38.525990  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:39.001713  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:39.017774  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:39.019461  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:39.026372  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:39.500859  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:39.519797  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:39.520622  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:39.525917  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:40.001647  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:40.017652  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:40.019113  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:40.025818  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:40.501928  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:40.518504  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:40.520340  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:40.526037  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:41.002231  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:41.017533  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:41.019687  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:41.025641  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:41.501410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:41.518018  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:41.519326  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:41.527062  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:42.001935  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:42.018556  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:42.020009  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:42.025868  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:42.501909  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:42.519346  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:42.521539  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:42.525544  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:43.003422  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:43.018807  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:43.020340  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:43.026621  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:43.501787  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:43.517772  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:43.520385  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:43.526006  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:44.001729  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:44.018572  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:44.020505  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:44.027512  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:44.500861  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:44.517878  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:44.519941  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:44.525966  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:45.002733  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:45.022017  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:45.023425  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:45.027913  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:45.501505  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:45.518036  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:45.518304  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:45.526497  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:46.000839  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:46.018027  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:46.020574  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:46.025140  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:46.502126  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:46.517267  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:46.519576  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:46.525318  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:47.002664  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:47.019029  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:47.020440  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:47.026307  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:47.502751  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:47.518532  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:47.519877  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:47.525668  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:48.001531  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:48.017987  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:48.018860  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:48.025975  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:48.501993  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:48.519439  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:48.520680  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:48.525869  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:49.003110  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:49.020088  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:49.020281  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:49.026209  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:49.501972  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:49.518761  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:49.520450  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:49.526669  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:50.001945  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:50.019111  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:50.020657  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:50.025651  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:50.501137  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:50.519077  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:50.519422  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:50.526050  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:51.002264  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:51.017514  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:51.020444  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:51.026653  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:51.501218  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:51.517606  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:51.519711  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:51.525538  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:52.001505  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:52.017697  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:52.019403  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:52.027381  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:52.501030  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:52.519679  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:52.520880  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:52.525311  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:53.002074  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:53.017920  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:53.020689  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:53.025485  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:53.501565  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:53.518005  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:53.518985  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:53.525510  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:54.001882  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:54.018972  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:54.019868  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:54.025509  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:54.501041  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:54.519696  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:54.520156  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:54.526253  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:55.003167  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:55.017108  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:55.020966  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:55.025536  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:55.501588  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:55.519412  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:55.520387  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:55.526801  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:56.001703  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:56.018098  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:56.019805  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:56.025874  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:56.501547  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:56.518508  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:56.519409  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:56.527341  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:57.001269  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:57.017737  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:57.019765  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:57.026345  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:57.111554  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:46:57.502821  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:57.521781  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:57.523859  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:57.526058  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:46:57.837380  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 08:46:57.837579  535088 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
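	[editor's aside] The inspektor-gadget failure above is a kubectl client-side validation error: at least one document in ig-crd.yaml reached `kubectl apply` without `apiVersion` and `kind` set. The log also shows kubectl's suggested escape hatch (`--validate=false`), which the retry here does not use. The sketch below is a small, hypothetical pre-flight check for that condition; it is not part of minikube or the test, it assumes gopkg.in/yaml.v3 is available, and the manifest path is copied from the failing command for illustration.

	```go
	package main

	import (
		"bytes"
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	// checkManifest walks every YAML document in path and reports the ones
	// missing apiVersion or kind -- the two fields kubectl's validation
	// flags as "not set" in the error above.
	func checkManifest(path string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		dec := yaml.NewDecoder(bytes.NewReader(data))
		for i := 0; ; i++ {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					return nil // all documents checked
				}
				return err
			}
			if doc == nil {
				continue // empty document, e.g. a trailing "---"
			}
			if _, ok := doc["apiVersion"]; !ok {
				fmt.Printf("document %d: apiVersion not set\n", i)
			}
			if _, ok := doc["kind"]; !ok {
				fmt.Printf("document %d: kind not set\n", i)
			}
		}
	}

	func main() {
		// Path taken from the failing apply in the log; adjust for local use.
		if err := checkManifest("/etc/kubernetes/addons/ig-crd.yaml"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
	```

	A check like this run against the generated addon manifests would surface the missing-fields problem before the apply/retry loop, rather than as the "Enabling 'inspektor-gadget' returned an error" warning seen here.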
	I1101 08:46:58.002477  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:58.017866  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:58.019513  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:58.025873  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:58.501877  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:58.518871  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:58.519700  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:58.525438  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:59.004488  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:59.026436  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:59.031423  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:59.033704  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:59.508129  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:59.521490  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:59.521737  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:59.526781  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:00.003739  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:00.022791  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:00.022910  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:00.026491  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:00.501517  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:00.517703  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:00.518550  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:00.528527  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:01.010322  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:01.026679  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:01.030087  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:01.030397  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:01.502386  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:01.517530  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:01.522260  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:01.532240  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:02.002156  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:02.022137  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:02.023086  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:02.026049  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:02.504322  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:02.519252  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:02.523461  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:02.528764  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:03.004016  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:03.019471  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:03.021442  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:03.026419  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:03.504419  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:03.519469  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:03.520406  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:03.525550  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:04.002462  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:04.020193  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:04.021462  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:04.026107  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:04.501642  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:04.517490  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:04.519930  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:04.526445  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:05.005197  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:05.018536  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:05.023123  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:05.029475  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:05.502664  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:05.518118  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:05.520518  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:05.526091  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:06.002738  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:06.019575  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:06.022744  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:06.026515  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:06.502554  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:06.519943  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:06.521590  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:06.526208  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:07.004023  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:07.019789  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:07.020273  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:07.026416  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:07.504157  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:07.518612  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:07.520773  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:07.527827  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:08.007295  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:08.020757  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:08.024258  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:08.031878  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:08.505225  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:08.518839  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:08.521622  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:08.525366  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:09.003369  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:09.024660  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:09.024787  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:09.029399  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:09.502978  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:09.520074  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:09.520999  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:09.527832  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:10.002118  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:10.019490  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:10.019688  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:10.026021  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:10.502365  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:10.517980  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:10.519426  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:10.526456  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:11.000763  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:11.017778  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:11.019554  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:11.025361  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:11.502621  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:11.519369  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:11.520248  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:11.525881  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:12.001298  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:12.019652  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:12.020408  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:12.026077  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:12.506179  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:12.518698  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:12.520608  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:12.525646  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:13.004165  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:13.018567  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:13.021172  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:13.026558  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:13.502399  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:13.517614  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:13.520163  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:13.526224  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:14.002692  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:14.018788  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:14.020233  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:14.026247  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:14.502451  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:14.519291  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:14.520395  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:14.528734  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:15.001583  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:15.017574  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:15.019594  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:15.027073  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:15.502087  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:15.518165  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:15.518856  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:15.526691  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:16.002848  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:16.019225  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:16.020564  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:16.025778  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:16.501756  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:16.518991  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:16.520609  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:16.525245  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:17.001845  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:17.019346  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:17.019684  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:17.026396  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:17.502188  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:17.517746  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:17.520856  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:17.525856  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:18.001858  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:18.018536  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:18.021348  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:18.026925  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:18.502390  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:18.517522  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:18.520124  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:18.525853  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:19.001850  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:19.019071  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:19.020953  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:19.025941  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:19.502259  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:19.517542  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:19.520882  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:19.526825  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:20.001558  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:20.018927  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:20.020008  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:20.025511  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:20.501320  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:20.517732  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:20.519487  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:20.526814  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:21.001370  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:21.018101  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:21.019530  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:21.025941  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:21.501703  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:21.517836  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:21.519684  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:21.526074  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:22.001809  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:22.017626  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:22.019534  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:22.025673  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:22.501888  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:22.520695  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:22.521501  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:22.527625  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:23.001636  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:23.017676  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:23.019410  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:23.026546  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:23.502193  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:23.517565  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:23.519741  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:23.525318  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:24.001469  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:24.018681  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:24.021251  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:24.026297  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:24.500658  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:24.517656  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:24.520275  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:24.526953  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:25.002390  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:25.018753  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:25.021470  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:25.026724  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:25.503080  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:25.519469  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:25.522083  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:25.525703  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:26.001480  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:26.018730  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:26.019775  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:26.025922  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:26.501850  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:26.518460  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:26.520597  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:26.526270  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:27.002686  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:27.017503  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:27.019988  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:27.026061  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:27.501773  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:27.519208  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:27.519306  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:27.526944  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:28.001885  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:28.018098  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:28.020961  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:28.026254  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:28.500970  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:28.519603  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:28.521180  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:28.526295  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:29.003607  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:29.018630  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:29.021082  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:29.026312  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:29.501919  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:29.517754  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:29.519736  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:29.525891  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:30.002036  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:30.018828  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:30.020404  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:30.026209  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:30.502329  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:30.517607  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:30.520177  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:30.527152  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:31.003066  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:31.020280  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:31.020496  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:31.026046  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:31.503011  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:31.519101  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:31.520154  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:31.525819  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:32.001349  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:32.017760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:32.020383  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:32.026548  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:32.501020  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:32.519372  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:32.520621  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:32.525197  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:33.001939  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:33.017981  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:33.018721  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:33.025389  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:33.502684  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:33.519286  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:33.519798  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:33.526360  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:34.001915  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:34.018089  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:34.018866  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:34.025884  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:34.502109  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:34.518315  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:34.520992  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:34.525955  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:35.001980  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:35.020058  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:35.020195  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:35.026107  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:35.502513  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:35.519131  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:35.519364  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:35.526431  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:36.001532  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:36.017633  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:36.019879  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:36.025714  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:36.501267  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:36.517441  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:36.519775  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:36.526367  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:37.002311  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:37.017625  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:37.020233  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:37.025830  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:37.502486  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:37.518494  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:37.519337  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:37.526256  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:38.002200  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:38.017679  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:38.020437  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:38.025635  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:38.502121  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:38.518742  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:38.519609  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:38.525528  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:39.001668  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:39.017868  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:39.019195  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:39.027138  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:39.502726  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:39.518837  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:39.519527  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:39.525448  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:40.037966  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:40.038824  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:40.039617  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:40.039888  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:40.510995  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:40.611235  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:40.611494  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:40.612020  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:41.007852  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:41.104319  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:41.105167  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:41.106241  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:41.503207  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:41.519701  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:41.523717  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:41.528111  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:42.002832  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:42.019368  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:42.026027  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:42.028968  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:42.504592  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:42.518781  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:42.522913  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:42.527017  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:43.002059  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:43.021540  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:43.022732  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:43.027733  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:43.501969  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:43.523064  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:43.523122  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:43.526723  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:44.016033  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:44.048228  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:44.048288  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:44.049707  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:44.510334  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:44.517005  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:44.520734  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:44.527760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:45.002493  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:45.025067  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:45.025090  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:45.030831  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:45.503106  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:45.519233  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:45.522740  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:45.526357  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:46.003368  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:46.021702  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:46.023084  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:46.025372  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:46.507201  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:46.528398  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:46.528540  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:46.528597  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:47.005313  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:47.021521  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:47.023522  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:47.030205  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:47.508306  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:47.517975  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:47.523254  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:47.528801  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:48.004599  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:48.018025  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:48.024054  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:48.030295  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:48.504150  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:48.518048  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:48.519937  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:48.527633  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:49.003426  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:49.021317  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:49.104457  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:49.105285  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:49.502613  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:49.520941  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:49.521038  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:49.525762  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:50.002168  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:50.018353  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:50.019606  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:50.025332  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:50.501342  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:50.518265  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:50.520375  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:50.526058  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:51.001482  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:51.018509  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:51.018674  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:51.026149  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:51.502439  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:51.518320  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:51.519717  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:51.525114  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:52.001594  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:52.017697  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:52.019121  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:52.026265  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:52.501713  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:52.517565  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:52.519496  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:52.525722  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:53.001345  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:53.018104  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:53.020275  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:53.025637  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:53.503025  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:53.518670  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:53.520663  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:53.525659  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:54.001263  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:54.018846  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:54.019116  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:54.025335  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:54.502071  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:54.519000  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:54.519010  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:54.525456  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:55.001977  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:55.017957  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:55.021189  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:55.026699  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:55.502333  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:55.517379  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:55.519350  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:55.526773  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:56.001599  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:56.018008  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:56.020215  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:56.025828  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:56.501455  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:56.517521  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:56.519235  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:56.527201  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:57.001827  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:57.020037  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:57.020749  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:57.025827  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:57.503759  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:57.517849  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:57.520371  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:57.526800  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:58.002360  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:58.017843  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:58.020412  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:58.026527  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:58.501394  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:58.517523  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:58.520352  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:58.525725  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:59.002102  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:59.017074  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:59.020520  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:59.026683  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:59.502383  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:59.517821  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:59.520938  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:59.525444  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:00.004519  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:00.104585  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:00.104625  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:00.104775  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:00.501109  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:00.518462  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:00.519031  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:00.525932  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:01.001882  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:01.018255  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:01.019640  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:01.025291  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:01.503231  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:01.518634  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:01.520274  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:01.526356  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:02.002389  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:02.018529  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:02.019411  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:02.026657  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:02.501043  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:02.518076  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:02.519080  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:02.526504  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:03.001361  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:03.019762  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:03.022333  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:03.025239  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:03.501714  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:03.519163  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:03.521149  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:03.526410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:04.000747  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:04.019676  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:04.020330  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:04.026159  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:04.502467  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:04.518491  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:04.518845  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:04.525769  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:05.001664  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:05.019454  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:05.019620  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:05.027022  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:05.502850  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:05.518666  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:05.520316  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:05.526009  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:06.002470  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:06.017750  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:06.019816  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:06.025697  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:06.501760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:06.519481  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:06.519738  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:06.525711  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:07.001752  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:07.017749  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:07.019804  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:07.025660  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:07.501792  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:07.517577  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:07.519794  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:07.525244  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:08.002742  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:08.018517  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:08.020369  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:08.026630  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:08.501587  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:08.518305  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:08.519219  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:08.526380  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:09.000977  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:09.018805  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:09.019761  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:09.025690  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:09.501890  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:09.517987  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:09.520782  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:09.525601  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:10.001949  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:10.018921  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:10.020592  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:10.026413  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:10.501660  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:10.518677  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:10.518948  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:10.525564  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:11.001486  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:11.017692  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:11.019759  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:11.025724  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:11.503245  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:11.519474  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:11.520078  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:11.525649  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:12.002655  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:12.017994  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:12.020743  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:12.025544  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:12.500866  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:12.519004  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:12.520797  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:12.527102  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:13.001891  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:13.019380  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:13.020948  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:13.025584  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:13.502039  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:13.519170  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:13.520827  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:13.525891  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:14.002597  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:14.018456  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:14.019344  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:14.025889  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:14.501808  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:14.518199  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:14.520114  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:14.526515  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:15.000809  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:15.017935  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:15.019860  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:15.026010  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:15.502293  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:15.517549  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:15.520189  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:15.603271  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:16.001815  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:16.018392  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:16.020440  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:16.025577  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:16.501456  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:16.517675  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:16.519938  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:16.525413  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:17.000943  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:17.017838  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:17.021846  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:17.026719  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:17.502498  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:17.517532  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:17.518370  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:17.526307  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:18.002824  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:18.019355  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:18.019386  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:18.027193  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:18.501577  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:18.518262  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:18.520767  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:18.525078  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:19.002037  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:19.020156  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:19.021197  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:19.025423  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:19.501921  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:19.519607  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:19.520544  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:19.524793  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:20.001960  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:20.018434  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:20.020315  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:20.026179  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:20.503025  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:20.518911  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:20.520556  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:20.525269  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:21.002029  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:21.024168  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:21.026997  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:21.031803  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:21.502358  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:21.517786  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:21.518786  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:21.525830  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:22.001594  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:22.017338  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:22.018324  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:22.025889  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:22.503054  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:22.520388  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:22.521916  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:22.526202  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:23.002517  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:23.020216  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:23.021156  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:23.028984  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:23.500976  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:23.519154  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:23.519316  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:23.526809  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:24.002882  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:24.019205  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:24.020141  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:24.026965  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:24.501036  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:24.518337  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:24.519991  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:24.525486  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:25.001657  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:25.018947  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:25.019127  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:25.025725  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:25.501581  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:25.518560  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:25.520017  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:25.525518  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:26.001825  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:26.018331  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:26.020369  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:26.026403  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:26.501127  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:26.519632  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:26.520978  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:26.525884  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:27.002361  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:27.018164  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:27.020412  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:27.027021  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:27.502390  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:27.517925  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:27.520125  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:27.525535  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:28.002688  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:28.017322  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:28.019838  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:28.025328  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:28.501474  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:28.517324  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:28.519128  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:28.525804  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:29.001640  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:29.017615  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:29.019699  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:29.025407  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:29.501333  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:29.518228  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:29.520320  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:29.526401  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:30.001257  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:30.017769  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:30.019813  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:30.025681  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:30.501852  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:30.517912  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:30.519457  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:30.525502  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:31.001036  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:31.018891  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:31.019341  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:31.026847  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:31.501891  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:31.517945  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:31.519845  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:31.525477  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:32.002494  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:32.018364  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:32.019047  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:32.025949  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:32.501632  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:32.517753  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:32.519551  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:32.525075  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:33.002010  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:33.019109  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:33.021003  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:33.025940  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:33.503032  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:33.518866  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:33.520801  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:33.525566  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:34.002115  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:34.017835  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:34.020583  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:34.026191  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:34.502465  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:34.517620  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:34.520272  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:34.526608  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:35.000870  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:35.018932  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:35.019718  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:35.025748  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:35.502491  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:35.517523  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:35.519496  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:35.525784  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:36.001520  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:36.019495  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:36.020061  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:36.026348  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:36.501803  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:36.519550  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:36.519863  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:36.526033  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:37.001475  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:37.018365  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:37.019331  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:37.026308  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:37.502572  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:37.517421  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:37.520211  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:37.525925  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:38.001941  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:38.019309  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:38.020493  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:38.027497  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:38.501822  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:38.517786  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:38.520262  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:38.526454  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:39.003835  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:39.019771  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:39.020317  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:39.025953  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:39.501469  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:39.517769  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:39.519531  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:39.526394  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:40.001467  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:40.018767  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:40.018975  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:40.025574  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:40.501327  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:40.517147  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:40.519793  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:40.525870  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:41.001711  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:41.019756  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:41.022733  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:41.025432  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:41.501110  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:41.517577  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:41.520152  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:41.526331  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:42.001665  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:42.018212  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:42.020818  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:42.027301  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:42.502145  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:42.518137  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:42.520139  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:42.525932  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:43.002613  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:43.018231  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:43.019849  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:43.026083  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:43.501054  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:43.518385  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:43.519196  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:43.526209  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:44.002494  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:44.017824  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:44.020797  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:44.026068  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:44.501618  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:44.519136  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:44.519498  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:44.526198  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:45.001727  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:45.019695  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:45.020007  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:45.026210  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:45.502382  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:45.518209  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:45.520090  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:45.526008  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:46.002275  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:46.017575  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:46.020217  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:46.026182  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:46.501858  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:46.518887  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:46.520199  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:46.525849  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:47.001391  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:47.017528  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:47.019856  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:47.026978  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:47.502108  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:47.517185  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:47.519497  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:47.526193  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:48.002439  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:48.018567  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:48.019868  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:48.026369  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:48.502252  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:48.518245  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:48.519830  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:48.525789  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:49.002157  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:49.017975  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:49.020029  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:49.026100  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:49.504825  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:49.517735  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:49.522486  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:49.528548  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:50.005615  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:50.019305  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:50.021640  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:50.027410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:50.501443  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:50.519328  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:50.519829  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:50.526094  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:51.001398  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:51.019374  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:51.020621  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:51.024951  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:51.501419  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:51.517860  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:51.519006  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:51.525945  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:52.002467  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:52.017274  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:52.019058  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:52.025509  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:52.501980  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:52.517824  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:52.519466  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:52.524793  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:53.001604  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:53.018807  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:53.019698  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:53.025324  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:53.501302  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:53.517854  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:53.519844  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:53.526844  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:54.001945  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:54.017746  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:54.020114  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:54.025868  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:54.501860  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:54.519009  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:54.520308  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:54.525824  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:55.001176  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:55.017056  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:55.019336  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:55.026011  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:55.502015  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:55.518868  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:55.519785  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:55.525794  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:56.002253  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:56.017282  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:56.020639  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:56.026305  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:56.501860  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:56.518058  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:56.519766  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:56.525982  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:57.001770  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:57.018418  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:57.021050  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:57.026140  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:57.502619  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:57.517497  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:57.519971  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:57.526180  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:58.002367  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:58.018215  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:58.020881  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:58.025867  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:58.502163  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:58.518906  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:58.519560  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:58.525238  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:59.002160  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:59.018131  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:59.019720  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:59.026035  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:59.501498  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:59.517861  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:59.520038  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:59.525911  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:00.008043  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:00.108599  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:00.108605  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:00.108940  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:00.501986  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:00.519116  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:00.519363  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:00.526237  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:01.002941  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:01.018164  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:01.019968  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:01.026086  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:01.501165  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:01.518371  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:01.519716  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:01.526191  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:02.003221  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:02.017756  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:02.020569  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:02.025532  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:02.502303  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:02.517833  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:02.520043  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:02.526299  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:03.001963  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:03.019603  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:03.020175  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:03.026074  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:03.501418  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:03.518548  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:03.519326  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:03.526362  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:04.001337  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:04.017680  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:04.020642  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:04.025160  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:04.501481  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:04.519187  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:04.519354  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:04.526002  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:05.001164  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:05.017266  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:05.020018  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:05.025815  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:05.501835  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:05.518458  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:05.519449  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:05.526988  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:06.001942  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:06.017559  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:06.019230  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:06.027617  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:06.501568  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:06.518953  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:06.519722  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:06.525410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:07.000827  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:07.017696  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:07.019798  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:07.025714  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:07.501984  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:07.519229  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:07.520125  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:07.525931  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:08.002067  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:08.018520  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:08.020314  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:08.026702  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:08.501478  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:08.518992  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:08.519109  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:08.525577  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:09.001061  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:09.019049  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:09.019914  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:09.025870  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:09.501375  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:09.517502  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:09.520013  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:09.525860  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:10.002219  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:10.018451  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:10.019784  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:10.025779  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:10.503078  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:10.519196  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:10.519485  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:10.528833  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:11.001789  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:11.017702  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:11.019708  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:11.025298  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:11.501809  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:11.517966  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:11.520785  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:11.526958  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:12.002467  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:12.017726  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:12.019345  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:12.026841  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:12.501551  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:12.518027  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:12.520217  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:12.526558  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:13.001536  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:13.018736  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:13.020611  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:13.025440  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:13.501358  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:13.517837  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:13.519745  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:13.526510  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:14.002283  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:14.017864  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:14.019800  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:14.025916  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:14.502006  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:14.519062  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:14.519655  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:14.525994  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:15.005447  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:15.017234  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:15.019831  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:15.026557  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:15.501996  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:15.519856  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:15.520083  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:15.525230  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:16.002748  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:16.019355  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:16.019533  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:16.025957  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:16.502580  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:16.517837  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:16.519968  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:16.525850  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:17.001935  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:17.019152  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:17.019529  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:17.025144  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:17.503036  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:17.518401  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:17.520738  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:17.525739  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:18.001970  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:18.018590  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:18.019682  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:18.026543  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:18.505234  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:18.517615  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:18.520770  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:18.525690  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:19.001486  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:19.018177  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:19.019004  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:19.025710  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:19.502094  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:19.519521  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:19.520380  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:19.526127  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:20.002068  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:20.020224  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:20.021127  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:20.025520  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:20.501694  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:20.518963  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:20.520765  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:20.525058  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:21.007417  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:21.019690  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:21.024784  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:21.025732  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:21.504133  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:21.520851  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:21.521975  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:21.528716  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:22.002656  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:22.019037  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:22.020474  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:22.026247  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:22.501702  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:22.517925  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:22.521095  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:22.526859  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:23.002583  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:23.019101  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:23.020457  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:23.025456  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:23.502095  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:23.518464  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:23.522059  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:23.526260  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:24.003337  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:24.017841  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:24.021116  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:24.025850  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:24.501756  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:24.518762  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:24.520412  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:24.527410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:25.001848  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:25.018927  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:25.019525  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:25.025681  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:25.501555  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:25.518984  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:25.519924  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:25.526028  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:26.002318  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:26.018839  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:26.021112  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:26.025766  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:26.501254  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:26.518654  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:26.520701  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:26.525608  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:27.001830  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:27.017870  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:27.020014  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:27.026744  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:27.501677  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:27.519613  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:27.519874  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:27.526220  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:28.002947  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:28.019118  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:28.020560  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:28.025161  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:28.501842  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:28.518344  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:28.519678  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:28.525197  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:29.003014  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:29.018826  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:29.020409  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:29.026088  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:29.501916  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:29.518127  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:29.520850  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:29.525382  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:30.001229  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:30.017453  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:30.019095  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:30.026360  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:30.502510  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:30.517380  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:30.518702  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:30.525410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:31.001216  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:31.018086  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:31.020349  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:31.026668  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:31.502075  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:31.518995  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:31.519726  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:31.526262  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:32.011176  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:32.018083  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:32.022218  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:32.026390  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:32.501928  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:32.518961  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:32.519981  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:32.525961  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:33.002956  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:33.018416  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:33.020053  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:33.026871  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:33.503382  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:33.518628  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:33.520030  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:33.526081  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:34.004511  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:34.017733  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:34.019809  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:34.026157  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:34.502455  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:34.517764  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:34.519007  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:34.525748  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:35.002201  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:35.018354  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:35.020561  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:35.024986  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:35.501676  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:35.518080  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:35.520259  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:35.526231  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:36.002290  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:36.017246  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:36.019747  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:36.025424  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:36.502256  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:36.519181  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:36.519361  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:36.526313  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:37.001733  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:37.017924  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:37.019432  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:37.024916  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:37.501788  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:37.518994  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:37.520329  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:37.526158  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:38.002306  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:38.017816  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:38.020329  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:38.026122  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:38.502214  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:38.517689  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:38.519368  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:38.526566  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:39.001344  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:39.018348  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:39.021395  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:39.026118  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:39.502411  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:39.519218  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:39.519487  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:39.526004  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:40.002233  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:40.017415  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:40.020521  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:40.026057  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:40.502613  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:40.518860  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:40.520188  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:40.526090  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:41.002091  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:41.018506  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:41.019711  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:41.025910  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:41.502421  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:41.518400  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:41.521296  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:41.527921  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:42.003104  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:42.018378  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:42.020878  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:42.026161  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:42.502129  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:42.518686  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:42.520170  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:42.525923  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:43.004390  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:43.019175  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:43.022158  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:43.026467  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:43.504086  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:43.520367  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:43.520550  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:43.525380  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:44.002978  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:44.103477  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:44.103494  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:44.104185  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:44.502233  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:44.519809  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:44.519835  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:44.526423  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:45.000496  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:45.018444  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:45.019039  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:45.026510  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:45.502226  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:45.517482  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:45.520689  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:45.525876  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:46.001596  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:46.019690  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:46.021682  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:46.025805  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:46.501418  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:46.517889  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:46.520740  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:46.526273  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:47.001808  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:47.018410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:47.020658  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:47.025282  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:47.502482  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:47.517540  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:47.520502  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:47.525363  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:48.002384  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:48.018017  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:48.020110  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:48.026034  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:48.505672  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:48.520527  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:48.523748  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:48.529163  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:49.002861  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:49.017744  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:49.019716  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:49.025934  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:49.503141  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:49.517174  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:49.519166  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:49.526456  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:50.001342  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:50.017719  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:50.020032  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:50.026547  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:50.501789  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:50.519072  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:50.519782  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:50.525316  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:51.002325  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:51.017470  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:51.021020  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:51.026334  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:51.504006  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:51.518610  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:51.520767  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:51.525227  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:52.003295  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:52.018224  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:52.023940  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:52.028747  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:52.507809  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:52.522785  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:52.523541  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:52.527593  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:53.006856  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:53.021835  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:53.023449  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:53.029978  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:53.506277  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:53.523013  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:53.524326  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:53.531084  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:54.006985  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:54.018665  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:54.023247  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:54.026006  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:54.503056  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:54.519576  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:54.522065  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:54.526728  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:55.003139  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:55.020881  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:55.022886  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:55.028847  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:55.502733  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:55.521726  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:55.530711  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:55.532556  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:56.002638  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:56.021902  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:56.026061  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:56.027811  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:56.501943  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:56.518059  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:56.520358  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:56.527803  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:57.001212  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:57.022110  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:57.023066  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:57.027074  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:57.511753  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:57.522407  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:57.525249  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:57.528427  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:58.003779  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:58.019398  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:58.020765  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:58.025087  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:58.502271  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:58.519021  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:58.520012  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:58.526423  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:59.001770  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:59.028122  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:59.028948  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:59.029097  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:59.503552  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:59.519454  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:59.526099  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:59.528549  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:00.002150  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:00.018589  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:00.020579  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:50:00.026070  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:00.503019  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:00.518818  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:00.521298  535088 kapi.go:107] duration metric: took 4m27.50578325s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1101 08:50:00.526236  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:01.004597  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:01.017417  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:01.026007  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:01.503117  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:01.517929  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:01.526118  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:02.002140  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:02.017309  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:02.026874  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:02.502193  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:02.517206  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:02.526479  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:03.002066  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:03.018800  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:03.026667  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:03.501870  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:03.518027  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:03.526907  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:04.001943  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:04.018110  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:04.026258  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:04.503167  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:04.518066  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:04.526754  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:05.007821  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:05.017748  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:05.025450  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:05.501643  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:05.518495  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:05.525885  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:06.001380  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:06.017918  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:06.026946  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:06.502671  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:06.518784  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:06.526820  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:07.001754  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:07.019448  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:07.025975  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:07.502164  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:07.517678  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:07.526283  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:08.002858  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:08.019273  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:08.027420  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:08.501670  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:08.518047  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:08.526214  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:09.001840  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:09.018206  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:09.027687  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:09.501188  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:09.517532  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:09.526417  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:10.001069  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:10.018157  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:10.026212  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:10.502289  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:10.518055  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:10.526968  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:11.001635  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:11.017991  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:11.025970  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:11.506621  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:11.517412  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:11.526728  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:12.001701  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:12.018119  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:12.025969  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:12.502625  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:12.517475  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:12.526044  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:13.002186  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:13.018439  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:13.026091  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:13.500970  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:13.519505  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:13.525838  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:14.001977  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:14.018285  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:14.027576  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:14.501280  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:14.517529  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:14.526733  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:15.002377  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:15.018228  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:15.026340  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:15.502885  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:15.517651  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:15.527123  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:16.001756  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:16.018508  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:16.026298  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:16.503500  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:16.517929  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:16.526229  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:17.005499  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:17.105592  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:17.105644  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:17.501723  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:17.518760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:17.525930  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:18.009252  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:18.020798  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:18.026084  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:18.502008  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:18.518188  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:18.526054  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:19.001524  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:19.017526  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:19.026186  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:19.501501  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:19.517658  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:19.526525  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:20.001537  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:20.017379  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:20.027037  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:20.501883  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:20.518635  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:20.525619  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:21.001489  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:21.018302  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:21.026672  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:21.501586  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:21.517885  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:21.526477  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:22.000991  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:22.019224  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:22.027309  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:22.502253  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:22.518048  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:22.526007  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:23.002357  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:23.017858  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:23.027027  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:23.500869  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:23.517747  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:23.526047  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:24.002561  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:24.018227  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:24.027043  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:24.502430  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:24.518125  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:24.526108  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:25.002567  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:25.017833  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:25.025933  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:25.502126  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:25.517859  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:25.526354  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:26.000814  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:26.017887  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:26.026568  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:26.502946  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:26.518678  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:26.526480  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:27.001266  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:27.017216  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:27.026609  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:27.501961  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:27.519120  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:27.526911  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:28.002183  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:28.017072  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:28.026509  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:28.503467  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:28.517754  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:28.525800  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:29.001730  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:29.018081  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:29.026318  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:29.503000  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:29.518477  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:29.525663  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:30.001609  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:30.018380  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:30.027170  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:30.502338  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:30.518067  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:30.526337  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:31.001716  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:31.019042  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:31.026553  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:31.502516  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:31.517742  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:31.526076  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:32.003220  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:32.017115  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:32.026003  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:32.503084  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:32.520638  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:32.525815  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:33.002310  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:33.017855  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:33.026358  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:33.501484  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:33.518215  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:33.527345  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:34.001194  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:34.018531  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:34.026371  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:34.501860  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:34.518822  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:34.526665  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:35.000987  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:35.018881  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:35.026261  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:35.503065  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:35.519434  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:35.526091  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:36.002048  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:36.019887  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:36.026789  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:36.502205  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:36.518344  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:36.527132  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:37.001713  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:37.018302  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:37.027636  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:37.502137  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:37.518679  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:37.526770  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:38.002674  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:38.018502  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:38.025131  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:38.502841  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:38.518479  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:38.525394  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:39.003210  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:39.017479  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:39.026633  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:39.501409  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:39.517624  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:39.525765  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:40.001504  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:40.017795  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:40.026635  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:40.504580  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:40.518573  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:40.526384  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:41.000864  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:41.018489  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:41.025191  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:41.501782  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:41.518173  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:41.526463  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:42.000518  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:42.017873  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:42.027131  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:42.502017  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:42.518539  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:42.526000  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:43.002999  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:43.018398  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:43.027329  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:43.501816  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:43.518023  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:43.526878  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:44.002714  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:44.018483  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:44.026808  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:44.502514  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:44.517486  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:44.525494  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:45.000916  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:45.017682  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:45.026270  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:45.504311  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:45.517633  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:45.529587  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:46.005819  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:46.019419  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:46.028247  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:46.501836  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:46.603570  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:46.604017  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:47.002957  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:47.020722  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:47.103677  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:47.504417  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:47.529109  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:47.535255  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:48.027116  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:48.027384  535088 kapi.go:107] duration metric: took 5m10.029733807s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1101 08:50:48.029168  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:48.029460  535088 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-994396 cluster.
	I1101 08:50:48.030850  535088 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1101 08:50:48.032437  535088 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
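As an aside on the gcp-auth messages above: a minimal sketch of opting a single pod out of credential mounting via the gcp-auth-skip-secret label the addon mentions. The pod name, image, and the "true" value are assumptions for illustration; only the label key comes from the output above.

    kubectl --context addons-994396 apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-auth-example        # hypothetical pod name
      labels:
        gcp-auth-skip-secret: "true"   # label key from the addon output; the value is assumed
    spec:
      containers:
      - name: sleep                    # hypothetical container, just keeps the pod running
        image: busybox
        command: ["sleep", "3600"]
    EOF

Per the message above, pods created before the addon was enabled only pick up the mount after being recreated or after rerunning addons enable with --refresh.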
	I1101 08:50:48.524544  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:48.531119  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:49.018726  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:49.026282  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:49.518154  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:49.526614  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:50.018751  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:50.026031  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:50.518756  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:50.526155  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:51.018153  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:51.026760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:51.518286  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:51.526672  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:52.017371  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:52.027754  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:52.518074  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:52.526416  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:53.018974  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:53.026602  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:53.518144  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:53.526654  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:54.018625  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:54.026704  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:54.517492  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:54.525999  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:55.019257  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:55.027958  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:55.518075  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:55.526142  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:56.018092  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:56.025605  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:56.518596  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:56.525863  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:57.017562  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:57.025851  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:57.518709  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:57.526387  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:58.018590  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:58.025978  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:58.517643  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:58.525642  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:59.018664  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:59.025863  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:59.517006  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:59.527349  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:00.020576  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:00.029108  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:00.518333  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:00.527511  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:01.018504  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:01.027157  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:01.518405  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:01.526704  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:02.018500  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:02.026694  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:02.517768  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:02.526967  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:03.018243  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:03.026700  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:03.517836  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:03.526719  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:04.017510  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:04.025944  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:04.517662  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:04.526213  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:05.019140  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:05.026847  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:05.522889  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:05.526826  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:06.017784  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:06.026272  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:06.517992  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:06.527109  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:07.018586  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:07.026175  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:07.518974  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:07.526376  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:08.018995  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:08.026615  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:08.517947  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:08.526011  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:09.018511  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:09.025631  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:09.518218  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:09.526593  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:10.018682  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:10.026784  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:10.519095  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:10.527301  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:11.018993  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:11.025690  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:11.518483  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:11.526408  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:12.018208  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:12.027483  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:12.518108  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:12.528506  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:13.018723  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:13.026036  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:13.519547  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:13.525883  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:14.017886  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:14.026485  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:14.518428  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:14.526099  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:15.018816  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:15.028223  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:15.517235  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:15.526608  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:16.019497  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:16.026823  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:16.518374  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:16.526536  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:17.019643  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:17.026636  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:17.519221  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:17.527357  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:18.018310  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:18.027561  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:18.517385  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:18.526970  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:19.018802  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:19.026280  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:19.518858  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:19.527610  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:20.017707  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:20.028465  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:20.518519  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:20.526293  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:21.026625  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:21.030779  535088 kapi.go:107] duration metric: took 5m45.508455317s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1101 08:51:21.518734  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:22.018071  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:22.517851  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:23.022943  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:23.518235  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:24.018970  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:24.517611  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:25.019971  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:25.519134  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:26.018419  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:26.518767  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:27.018701  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:27.519283  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:28.019085  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:28.518032  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:29.019182  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:29.519048  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:30.018264  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:30.518858  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:31.018124  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:31.519120  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:32.021956  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:32.519959  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:33.014506  535088 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=registry" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1101 08:51:33.014547  535088 kapi.go:107] duration metric: took 6m0.000528296s to wait for kubernetes.io/minikube-addons=registry ...
	W1101 08:51:33.014668  535088 out.go:285] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I1101 08:51:33.016548  535088 out.go:179] * Enabled addons: amd-gpu-device-plugin, storage-provisioner, cloud-spanner, ingress-dns, nvidia-device-plugin, registry-creds, metrics-server, yakd, default-storageclass, volumesnapshots, ingress, gcp-auth, csi-hostpath-driver
	I1101 08:51:33.017988  535088 addons.go:515] duration metric: took 6m9.594756816s for enable addons: enabled=[amd-gpu-device-plugin storage-provisioner cloud-spanner ingress-dns nvidia-device-plugin registry-creds metrics-server yakd default-storageclass volumesnapshots ingress gcp-auth csi-hostpath-driver]
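For reference, a hand-run triage sketch for the registry timeout reported above. The label selector is the one the waiter polls (kubernetes.io/minikube-addons=registry) and the context name is taken from this run; the commands themselves are illustrative, not something the test executes.

    kubectl --context addons-994396 -n kube-system get pods -l kubernetes.io/minikube-addons=registry -o wide
    kubectl --context addons-994396 -n kube-system describe pod -l kubernetes.io/minikube-addons=registry
    kubectl --context addons-994396 -n kube-system get events --sort-by=.lastTimestamp | grep -i registry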
	I1101 08:51:33.018036  535088 start.go:247] waiting for cluster config update ...
	I1101 08:51:33.018057  535088 start.go:256] writing updated cluster config ...
	I1101 08:51:33.018363  535088 ssh_runner.go:195] Run: rm -f paused
	I1101 08:51:33.027702  535088 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 08:51:33.035072  535088 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2rqh8" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.039692  535088 pod_ready.go:94] pod "coredns-66bc5c9577-2rqh8" is "Ready"
	I1101 08:51:33.039727  535088 pod_ready.go:86] duration metric: took 4.614622ms for pod "coredns-66bc5c9577-2rqh8" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.041954  535088 pod_ready.go:83] waiting for pod "etcd-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.046075  535088 pod_ready.go:94] pod "etcd-addons-994396" is "Ready"
	I1101 08:51:33.046103  535088 pod_ready.go:86] duration metric: took 4.126087ms for pod "etcd-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.048189  535088 pod_ready.go:83] waiting for pod "kube-apiserver-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.052772  535088 pod_ready.go:94] pod "kube-apiserver-addons-994396" is "Ready"
	I1101 08:51:33.052802  535088 pod_ready.go:86] duration metric: took 4.587761ms for pod "kube-apiserver-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.055446  535088 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.433771  535088 pod_ready.go:94] pod "kube-controller-manager-addons-994396" is "Ready"
	I1101 08:51:33.433801  535088 pod_ready.go:86] duration metric: took 378.329685ms for pod "kube-controller-manager-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.634675  535088 pod_ready.go:83] waiting for pod "kube-proxy-fbmdq" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:34.034403  535088 pod_ready.go:94] pod "kube-proxy-fbmdq" is "Ready"
	I1101 08:51:34.034444  535088 pod_ready.go:86] duration metric: took 399.738812ms for pod "kube-proxy-fbmdq" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:34.233978  535088 pod_ready.go:83] waiting for pod "kube-scheduler-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:34.633095  535088 pod_ready.go:94] pod "kube-scheduler-addons-994396" is "Ready"
	I1101 08:51:34.633131  535088 pod_ready.go:86] duration metric: took 399.109096ms for pod "kube-scheduler-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:34.633149  535088 pod_ready.go:40] duration metric: took 1.605381934s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
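A rough manual equivalent of this extra readiness wait, expressed as kubectl commands (illustrative only; minikube performs the check in-process against the same label sets, not via kubectl, and the 4m timeout mirrors the value logged above):

    kubectl --context addons-994396 -n kube-system wait pod \
      -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)' \
      --for=condition=Ready --timeout=4m
    kubectl --context addons-994396 -n kube-system wait pod \
      -l 'k8s-app in (kube-dns,kube-proxy)' \
      --for=condition=Ready --timeout=4m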
	I1101 08:51:34.682753  535088 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 08:51:34.684612  535088 out.go:179] * Done! kubectl is now configured to use "addons-994396" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 01 08:57:25 addons-994396 crio[817]: time="2025-11-01 08:57:25.955163646Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ccd87df9-3d1e-4cab-8e4d-b423e9a7adaa name=/runtime.v1.RuntimeService/Version
	Nov 01 08:57:25 addons-994396 crio[817]: time="2025-11-01 08:57:25.956498696Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c5adac35-ebb5-4da1-8c02-834cbadf5be2 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 08:57:25 addons-994396 crio[817]: time="2025-11-01 08:57:25.957618400Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761987445957593479,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:454585,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c5adac35-ebb5-4da1-8c02-834cbadf5be2 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 08:57:25 addons-994396 crio[817]: time="2025-11-01 08:57:25.958446001Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e456730-d42d-4d33-8cd4-0f9b37566b8c name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:57:25 addons-994396 crio[817]: time="2025-11-01 08:57:25.958572289Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e456730-d42d-4d33-8cd4-0f9b37566b8c name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:57:25 addons-994396 crio[817]: time="2025-11-01 08:57:25.959441766Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9aac7eb34690309e8dbd81343ee4a3afed4182f729bfb09119b2d0449fcb5163,PodSandboxId:cdbcecc3e9d43396748d11feb94389c468413b4e4db1f33c0ffbb67ba8cb8455,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761987117609973399,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f6cc746-15b0-4ddb-9f8b-fa3a7e7133ea,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c914a21ca5c30d325bf10151384a21f9bbcc7e25b2d34ca61bfaddd16505122,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1761987080383755595,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:437ef3bce50ac8a7ca0b9a31a96e010fea2dd24bba8a7a5f778f7bb5721a6a9d,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1761987048807726890,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f73cee1644b036ab76f839b96acf06de4009bbf807c978116290374a0b56065c,PodSandboxId:147663b03fe636d80386c5b9e498c5fb95c78d278121e7fb146f12c7e973609d,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761986999531197788,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-9cxnd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bf616938-c2ab-4f4c-92c8-9fa4ab2f6be9,},Annotations:map[string]
string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:862808e2ff30fdd764f8aaf3d5b1a5df067d9f837db07ff0372f86bd3b55cab5,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1761986992483188170,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4eac7bee2514139306d8419dc1c70f3cc677629e0546239a0322053b09eab44,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1761986961550289998,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89e19f39781eba8b57e656eb2450f2409f9b0faf0e3401335506a480d9066dc6,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1761986930173408810,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68bf99b640c16170eb3d1decd09fc1b538fbd6fde76792990703d14d18fd9728,PodSandboxId:c090988aa5e05ea1d7a0662eb99922460d3efcf1e9882123710f19fefe939704,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1761986868787532616,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf63ab79-b3fa-4917-a62b-a0758d1521b0,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39137378c3801cd49058632db343f950f188a84e2ff8cf681c71963efac4314f,PodSandboxId:6eaf5e212ad1c55657254e78247ce413b9c2d3e12e8e2cd69b6ccde788266623,Metadata:&ContainerMetadata{Name
:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1761986866382667222,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ee1d9b2-a99a-4003-9c65-77bd5e500b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80b7ac026d7558ab3c69afb722ff55dfe32d67be3e2bf197089b95da3dd31104,PodSandboxId:5ef1abbd77f24535b60585d2197c8a2259c59626ad0eb005b609003b505409e3,Metada
ta:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761986864620312300,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-jbkmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19dc2ae7-668b-4952-9c2d-6602eac4449e,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63011b6ec66fda56834e6c96c9772b128675e14e51fd5b96d9518a8ba29fa35,PodSandbox
Id:eeeab7772fb0e74c5be38da53381a6b90d0d5c26e9c8b732d2e1c6eb63671c65,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761986864516805400,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-2pbx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9e973a4-20dd-4785-a3d6-1557c012cc76,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6
e0352b147e8a8fe43c9d94072f3f3fcc98914a55a5718cfd5fe168dcdb81f49,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1761986863046366251,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fbb154c5ba009280da1a426866a4cdde2195fb0006640dafb05c0da182a4866,PodSandboxId:058d4f2c90db7e8eae07ad5783426e56e467541eacbcb171f0f9227663407e68,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761986861153109309,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dmt9r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7e49bedc-b72d-400d-bc07-62040e55ac39,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e6c68a57ee535127b46ca112ce1439ee32d248af87fb4452856eb3e38c8eb2e,PodSandboxId:a5dfb28615faf962ed89b8003d79c80e87152c2a8d669af58898bd3254030389,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761986861018576547,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6ptqs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9fe7abf8-c7e2-47ee-ac99-699c34674a22,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d2226436f827529da95ea6b9148e9aad9e62a07499351f701e80b097311d036,PodSandboxId:c449271f0824b108061a1ee1fc23fbe6d16056014d0cfc3011aa2c20b94a8e24,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1761986829754350164,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-bzs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151e456a-63e0-4527-8511-34c4444fef48,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.
ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda41d22ea7ff808cb20920820ccf87f95d0c484f75f853dec58fc5d4aaa461b,PodSandboxId:e07af8e7a3ecad5569ae3da9545b988c374ac9f7b90e8533dd68c1dd6ecef92c,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761986826170977750,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-z8nnd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c555360c-9a9f-4f
dd-aa67-f18c3d2a4eb2,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b56bd6c195bd711f17cd7b927c9fbb20679383d08b6e954d3297e9850be5235,PodSandboxId:6d69749ca9bc78fa01c49c7d0757f3d0eafa3536279a622367a1a3b427e5d70c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1761986821805194743,Labels:map[string]string{io.kubernetes.container.name: local-pa
th-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-9ghvj,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d3c3231a-40d9-42f1-bc78-e2d1a104327a,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4c1be283a7f47690c854c85c4dcacc3e8b42f6727081c4a8a73e3e44c1d194,PodSandboxId:9f7ac0dd48cc1abeb4273f865cde830d51e77c8bd29a6c76ccecaf35745e99f7,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:176198675844940796
3,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d947f942-2149-492a-9b4e-1f9c22405815,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad7748982f904bf89ac86d1b7be83acfe37cfe9d240db5a3d2236808b8910a3,PodSandboxId:ca1dd787f338ac0254f2b930b7369f671d7ee68d7732bee6af1cf786d745c456,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c887
2c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761986733821709901,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0182754-0c9c-458b-a340-20ec025cb56c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bb5f4d4e768dfe5c0cf6bc80363bf72a32d74ddba50c19fc7e3e82b2268e1d3,PodSandboxId:fec37181f6706eb4994bc850d0e6623521190c923720024b4407780ba5c3168a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761986732059653348,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-vssmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b8c16e-b583-47df-a5c2-97218d3ec5be,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ff7b8e8784408623315cf07e8942d13f74e52cb65ad09e2d25796114020c1,PodSandboxId:d62d15d11c4955eb24e7866e8b7732b6d4471d399c0e33cef74d06eb40917eec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e
0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761986725130503569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2rqh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131b2b2-f9b9-4197-8bc7-4d1bc185c804,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0a2f86b38f42fab057b3fea7994c150
73ec1d05f3db97341f0fed0ad342cf9,PodSandboxId:e1fb2fcb1123b9a18ac17a1d8481c82478eed03828d094aab60d26b7c2f58bbd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761986724242985390,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fbmdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5dd6b4-2f38-4c9d-acd8-92f7984fd96a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80489befa62b8185c103a7d016a78a5924e4c5187536cb66142d1c5f8cc4a5b5,P
odSandboxId:d4cfa30f1a32a450d85f51370323574b5a0bcae75643efe39250a8b24cc1a1c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761986712208719638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0eeda84be59c6c1c023d04bf2f88758,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:844d913e662bc4587cf597763a1bad42bb8a4bf500ce948d822cfcb86a7e9fde,PodSandboxId:e2f739ab181cd43a508788c71e0d98b6ca0994d643a2896de2364e7f842ffa0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761986712197993742,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d081dd6df6b55662a095a017ad5712,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdeec4098b47d6e27b77f71ac1761aeb26a09c97d53566cde6a7c5ae79150c25,PodSandboxId:f1c88f09470e5834b2b0cfcdaddaf03ac25c10fd6f3492dc69b5941eb059bbae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761986712168522475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abcff5cb337834c6fd7a11d68a6b7be4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35bb45a49c1f528c9112deb8bfa037389ae6fae43afcbb2f86e4c3ed61156bf8,PodSandboxId:80615bf9878bb70db26be3ecace94169c4b7e503113541f10f7df27e95d8c035,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761986712170158026,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5912e2b5f9c4192157a57bf3d5021f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505
,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0e456730-d42d-4d33-8cd4-0f9b37566b8c name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:57:25 addons-994396 crio[817]: time="2025-11-01 08:57:25.970978469Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4fccf031-e894-4800-a477-aed74f83d656 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 01 08:57:25 addons-994396 crio[817]: time="2025-11-01 08:57:25.971478339Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:7a688e95ff774d333d03aeba9040f6474240997aeddde89b8afd82798cc9e706,Metadata:&PodSandboxMetadata{Name:nginx,Uid:9c49ac5d-18e5-470b-9217-c0a58f0636a1,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761987369396636901,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9c49ac5d-18e5-470b-9217-c0a58f0636a1,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:56:09.077414941Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d45873adc32c059e48321204580348724fe2849e18f32a716c6a20a49980c0f0,Metadata:&PodSandboxMetadata{Name:helper-pod-create-pvc-2db794c4-2444-4d03-b933-772cf722902e,Uid:e25da403-345f-40f6-b6f9-e28731089dd6,Namespace:local-path-storage,Attempt:0,},State:SANDBOX_READY,CreatedAt
:1761987329226197208,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: helper-pod-create-pvc-2db794c4-2444-4d03-b933-772cf722902e,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: e25da403-345f-40f6-b6f9-e28731089dd6,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:55:28.903715198Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c5a1f5307a5a0e8d620f46ea3fb4500fae706cd5d81b910f9344a2dc34840763,Metadata:&PodSandboxMetadata{Name:task-pv-pod,Uid:8623da74-791e-4fd6-a974-60ebca5738a7,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761987164436439077,Labels:map[string]string{app: task-pv-pod,io.kubernetes.container.name: POD,io.kubernetes.pod.name: task-pv-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8623da74-791e-4fd6-a974-60ebca5738a7,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:52:44.116093759Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox
{Id:cdbcecc3e9d43396748d11feb94389c468413b4e4db1f33c0ffbb67ba8cb8455,Metadata:&PodSandboxMetadata{Name:busybox,Uid:4f6cc746-15b0-4ddb-9f8b-fa3a7e7133ea,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761987095651519563,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f6cc746-15b0-4ddb-9f8b-fa3a7e7133ea,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:51:35.327103269Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:147663b03fe636d80386c5b9e498c5fb95c78d278121e7fb146f12c7e973609d,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-675c5ddd98-9cxnd,Uid:bf616938-c2ab-4f4c-92c8-9fa4ab2f6be9,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986982879427207,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-aut
h-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-9cxnd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bf616938-c2ab-4f4c-92c8-9fa4ab2f6be9,pod-template-hash: 675c5ddd98,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:45:32.720554779Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c090988aa5e05ea1d7a0662eb99922460d3efcf1e9882123710f19fefe939704,Metadata:&PodSandboxMetadata{Name:csi-hostpath-resizer-0,Uid:cf63ab79-b3fa-4917-a62b-a0758d1521b0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986738627441276,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,app.kubernetes.io/name: csi-hostpath-resizer,apps.kubernetes.io/pod-index: 0,controller-revision-hash: csi-hostpath-resizer-5f4978ffc6,io.kubernetes.container.name: POD,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf63ab79-b3fa-4917
-a62b-a0758d1521b0,kubernetes.io/minikube-addons: csi-hostpath-driver,statefulset.kubernetes.io/pod-name: csi-hostpath-resizer-0,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:45:35.497727216Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6eaf5e212ad1c55657254e78247ce413b9c2d3e12e8e2cd69b6ccde788266623,Metadata:&PodSandboxMetadata{Name:csi-hostpath-attacher-0,Uid:3ee1d9b2-a99a-4003-9c65-77bd5e500b0a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986736970680925,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,app.kubernetes.io/name: csi-hostpath-attacher,apps.kubernetes.io/pod-index: 0,controller-revision-hash: csi-hostpath-attacher-576bccf57,io.kubernetes.container.name: POD,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ee1d9b2-a99a-4003-9c65-77bd5e500b0a,kubernetes.io/minikube-addons: csi-hostpath-driver,statefulset.kubernetes.io/pod-name: csi-hostpath-at
tacher-0,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:45:35.165829458Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&PodSandboxMetadata{Name:csi-hostpathplugin-7l7ps,Uid:a1c291ec-002e-43dc-acb1-5bc4483fa6fd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986736808163856,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,app.kubernetes.io/component: plugin,app.kubernetes.io/instance: hostpath.csi.k8s.io,app.kubernetes.io/name: csi-hostpathplugin,app.kubernetes.io/part-of: csi-driver-host-path,controller-revision-hash: bfd669d76,io.kubernetes.container.name: POD,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,kubernetes.io/minikube-addons: csi-hostpath-driver,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-
01T08:45:35.283625413Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5ef1abbd77f24535b60585d2197c8a2259c59626ad0eb005b609003b505409e3,Metadata:&PodSandboxMetadata{Name:snapshot-controller-7d9fbc56b8-jbkmr,Uid:19dc2ae7-668b-4952-9c2d-6602eac4449e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986736781212367,Labels:map[string]string{app: snapshot-controller,io.kubernetes.container.name: POD,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-jbkmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19dc2ae7-668b-4952-9c2d-6602eac4449e,pod-template-hash: 7d9fbc56b8,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:45:33.962278007Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:eeeab7772fb0e74c5be38da53381a6b90d0d5c26e9c8b732d2e1c6eb63671c65,Metadata:&PodSandboxMetadata{Name:snapshot-controller-7d9fbc56b8-2pbx5,Uid:e9e973a4-20dd-4785-a3d6-1557c012cc76,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,Created
At:1761986735686254069,Labels:map[string]string{app: snapshot-controller,io.kubernetes.container.name: POD,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-2pbx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9e973a4-20dd-4785-a3d6-1557c012cc76,pod-template-hash: 7d9fbc56b8,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:45:33.919600116Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e07af8e7a3ecad5569ae3da9545b988c374ac9f7b90e8533dd68c1dd6ecef92c,Metadata:&PodSandboxMetadata{Name:gadget-z8nnd,Uid:c555360c-9a9f-4fdd-aa67-f18c3d2a4eb2,Namespace:gadget,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986732252775766,Labels:map[string]string{controller-revision-hash: d797fcb64,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gadget-z8nnd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c555360c-9a9f-4fdd-aa67-f18c3d2a4eb2,k8s-app: gadget,pod-template-generation: 1,},Annotations:map[string]string{container.apparmor.security.beta
.kubernetes.io/gadget: unconfined,kubernetes.io/config.seen: 2025-11-01T08:45:31.810689200Z,kubernetes.io/config.source: api,prometheus.io/path: /metrics,prometheus.io/port: 2223,prometheus.io/scrape: true,},RuntimeHandler:,},&PodSandbox{Id:6d69749ca9bc78fa01c49c7d0757f3d0eafa3536279a622367a1a3b427e5d70c,Metadata:&PodSandboxMetadata{Name:local-path-provisioner-648f6765c9-9ghvj,Uid:d3c3231a-40d9-42f1-bc78-e2d1a104327a,Namespace:local-path-storage,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986731585408537,Labels:map[string]string{app: local-path-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-9ghvj,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d3c3231a-40d9-42f1-bc78-e2d1a104327a,pod-template-hash: 648f6765c9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:45:30.990687010Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ca1dd787f338ac0254f2b930b7369f671d7ee68d7732bee6af1cf786d745c456,
Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:a0182754-0c9c-458b-a340-20ec025cb56c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986731574668336,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0182754-0c9c-458b-a340-20ec025cb56c,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\
",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-11-01T08:45:30.530361901Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9f7ac0dd48cc1abeb4273f865cde830d51e77c8bd29a6c76ccecaf35745e99f7,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:d947f942-2149-492a-9b4e-1f9c22405815,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986731411874379,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d947f942-2149-492a-9b4e-1f9c22405815,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ing
ress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"hostPort\":53,\"protocol\":\"UDP\"}],\"volumeMounts\":[{\"mountPath\":\"/config\",\"name\":\"minikube-ingress-dns-config-volume\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\",\"volumes\":[{\"configMap\":{\"name\":\"minikube-ingress-dns\"},\"name\":\"minikube-ingress-dns-config-volume\"}]}}\n,kubernetes.io/config.seen: 2025-11-01T08:45:29.770167923Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c449271f0824b108061a1ee1fc23fbe6d16056014d0cfc3011
aa2c20b94a8e24,Metadata:&PodSandboxMetadata{Name:registry-proxy-bzs78,Uid:151e456a-63e0-4527-8511-34c4444fef48,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986731364422760,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,controller-revision-hash: 65b944f647,io.kubernetes.container.name: POD,io.kubernetes.pod.name: registry-proxy-bzs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151e456a-63e0-4527-8511-34c4444fef48,kubernetes.io/minikube-addons: registry,pod-template-generation: 1,registry-proxy: true,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:45:29.495875265Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b06b6cc06bc5fa49dc1e6aa03c98e75401763147b91202b99f1d103ce1ee29d2,Metadata:&PodSandboxMetadata{Name:registry-6b586f9694-b4ph6,Uid:f2c8e5be-bee4-4b31-a8dc-ee43d6a6430c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986731333681368,Labels:map[string]string{actual-registry: true,a
ddonmanager.kubernetes.io/mode: Reconcile,io.kubernetes.container.name: POD,io.kubernetes.pod.name: registry-6b586f9694-b4ph6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2c8e5be-bee4-4b31-a8dc-ee43d6a6430c,kubernetes.io/minikube-addons: registry,pod-template-hash: 6b586f9694,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:45:29.152437473Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fec37181f6706eb4994bc850d0e6623521190c923720024b4407780ba5c3168a,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plugin-vssmp,Uid:a3b8c16e-b583-47df-a5c2-97218d3ec5be,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986727049009432,Labels:map[string]string{controller-revision-hash: 7f87d6fd8d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-vssmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b8c16e-b583-47df-a5c2-97218d3ec5be,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-templ
ate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:45:26.718957327Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d62d15d11c4955eb24e7866e8b7732b6d4471d399c0e33cef74d06eb40917eec,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-2rqh8,Uid:b131b2b2-f9b9-4197-8bc7-4d1bc185c804,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986724017093656,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-2rqh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131b2b2-f9b9-4197-8bc7-4d1bc185c804,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:45:23.654384746Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e1fb2fcb1123b9a18ac17a1d8481c82478eed03828d094aab60d26b7c2f58bbd,Metadata:&PodSandboxMetadata{Name:kube-proxy-fbmdq,Uid:dc5dd6b4-2f38-4c9d-acd8-92f7984fd96a,Namespace:kube-system,Attem
pt:0,},State:SANDBOX_READY,CreatedAt:1761986723855325038,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-fbmdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5dd6b4-2f38-4c9d-acd8-92f7984fd96a,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T08:45:23.475753329Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e2f739ab181cd43a508788c71e0d98b6ca0994d643a2896de2364e7f842ffa0d,Metadata:&PodSandboxMetadata{Name:etcd-addons-994396,Uid:31d081dd6df6b55662a095a017ad5712,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986711956221288,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d081dd6df6b55662a095a017ad5712,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes
.io/etcd.advertise-client-urls: https://192.168.39.195:2379,kubernetes.io/config.hash: 31d081dd6df6b55662a095a017ad5712,kubernetes.io/config.seen: 2025-11-01T08:45:11.165275870Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:80615bf9878bb70db26be3ecace94169c4b7e503113541f10f7df27e95d8c035,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-994396,Uid:5912e2b5f9c4192157a57bf3d5021f7e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986711949626239,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5912e2b5f9c4192157a57bf3d5021f7e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5912e2b5f9c4192157a57bf3d5021f7e,kubernetes.io/config.seen: 2025-11-01T08:45:11.165273714Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d4cfa30f1a32a450d85f5137032
3574b5a0bcae75643efe39250a8b24cc1a1c1,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-994396,Uid:e0eeda84be59c6c1c023d04bf2f88758,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986711947877914,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0eeda84be59c6c1c023d04bf2f88758,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e0eeda84be59c6c1c023d04bf2f88758,kubernetes.io/config.seen: 2025-11-01T08:45:11.165274783Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f1c88f09470e5834b2b0cfcdaddaf03ac25c10fd6f3492dc69b5941eb059bbae,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-994396,Uid:abcff5cb337834c6fd7a11d68a6b7be4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761986711944495415,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name
: POD,io.kubernetes.pod.name: kube-apiserver-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abcff5cb337834c6fd7a11d68a6b7be4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.195:8443,kubernetes.io/config.hash: abcff5cb337834c6fd7a11d68a6b7be4,kubernetes.io/config.seen: 2025-11-01T08:45:11.165269521Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=4fccf031-e894-4800-a477-aed74f83d656 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 01 08:57:25 addons-994396 crio[817]: time="2025-11-01 08:57:25.972617921Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f625037b-dfa4-460b-916e-2573af77cec8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:57:25 addons-994396 crio[817]: time="2025-11-01 08:57:25.973057572Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f625037b-dfa4-460b-916e-2573af77cec8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:57:25 addons-994396 crio[817]: time="2025-11-01 08:57:25.974035086Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9aac7eb34690309e8dbd81343ee4a3afed4182f729bfb09119b2d0449fcb5163,PodSandboxId:cdbcecc3e9d43396748d11feb94389c468413b4e4db1f33c0ffbb67ba8cb8455,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761987117609973399,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f6cc746-15b0-4ddb-9f8b-fa3a7e7133ea,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c914a21ca5c30d325bf10151384a21f9bbcc7e25b2d34ca61bfaddd16505122,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1761987080383755595,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:437ef3bce50ac8a7ca0b9a31a96e010fea2dd24bba8a7a5f778f7bb5721a6a9d,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1761987048807726890,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f73cee1644b036ab76f839b96acf06de4009bbf807c978116290374a0b56065c,PodSandboxId:147663b03fe636d80386c5b9e498c5fb95c78d278121e7fb146f12c7e973609d,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761986999531197788,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-9cxnd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bf616938-c2ab-4f4c-92c8-9fa4ab2f6be9,},Annotations:map[string]
string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:862808e2ff30fdd764f8aaf3d5b1a5df067d9f837db07ff0372f86bd3b55cab5,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1761986992483188170,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4eac7bee2514139306d8419dc1c70f3cc677629e0546239a0322053b09eab44,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1761986961550289998,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89e19f39781eba8b57e656eb2450f2409f9b0faf0e3401335506a480d9066dc6,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1761986930173408810,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68bf99b640c16170eb3d1decd09fc1b538fbd6fde76792990703d14d18fd9728,PodSandboxId:c090988aa5e05ea1d7a0662eb99922460d3efcf1e9882123710f19fefe939704,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1761986868787532616,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf63ab79-b3fa-4917-a62b-a0758d1521b0,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39137378c3801cd49058632db343f950f188a84e2ff8cf681c71963efac4314f,PodSandboxId:6eaf5e212ad1c55657254e78247ce413b9c2d3e12e8e2cd69b6ccde788266623,Metadata:&ContainerMetadata{Name
:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1761986866382667222,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ee1d9b2-a99a-4003-9c65-77bd5e500b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80b7ac026d7558ab3c69afb722ff55dfe32d67be3e2bf197089b95da3dd31104,PodSandboxId:5ef1abbd77f24535b60585d2197c8a2259c59626ad0eb005b609003b505409e3,Metada
ta:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761986864620312300,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-jbkmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19dc2ae7-668b-4952-9c2d-6602eac4449e,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63011b6ec66fda56834e6c96c9772b128675e14e51fd5b96d9518a8ba29fa35,PodSandbox
Id:eeeab7772fb0e74c5be38da53381a6b90d0d5c26e9c8b732d2e1c6eb63671c65,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761986864516805400,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-2pbx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9e973a4-20dd-4785-a3d6-1557c012cc76,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6
e0352b147e8a8fe43c9d94072f3f3fcc98914a55a5718cfd5fe168dcdb81f49,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1761986863046366251,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d2226436f827529da95ea6b9148e9aad9e62a07499351f701e80b097311d036,PodSandboxId:c449271f0824b108061a1ee1fc23fbe6d16056014d0cfc3011aa2c20b94a8e24,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1761986829754350164,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-bzs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151e456a-63e0-4527-8511-34c4444fef48,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"p
rotocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda41d22ea7ff808cb20920820ccf87f95d0c484f75f853dec58fc5d4aaa461b,PodSandboxId:e07af8e7a3ecad5569ae3da9545b988c374ac9f7b90e8533dd68c1dd6ecef92c,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761986826170977750,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-z8nnd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c555360c-9a9f-4fdd-aa67-f18c3d2a4eb2,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b56bd6c195bd711f17cd7b927c9fbb20679383d08b6e954d3297e9850be5235,PodSandboxId:6d69749ca9bc78fa01c49c7d0757f3d0eafa3536279a622367a1a3b427e5d70c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1761986821805194743,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-
9ghvj,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d3c3231a-40d9-42f1-bc78-e2d1a104327a,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4c1be283a7f47690c854c85c4dcacc3e8b42f6727081c4a8a73e3e44c1d194,PodSandboxId:9f7ac0dd48cc1abeb4273f865cde830d51e77c8bd29a6c76ccecaf35745e99f7,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761986758449407963,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress
-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d947f942-2149-492a-9b4e-1f9c22405815,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad7748982f904bf89ac86d1b7be83acfe37cfe9d240db5a3d2236808b8910a3,PodSandboxId:ca1dd787f338ac0254f2b930b7369f671d7ee68d7732bee6af1cf786d745c456,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:176
1986733821709901,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0182754-0c9c-458b-a340-20ec025cb56c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bb5f4d4e768dfe5c0cf6bc80363bf72a32d74ddba50c19fc7e3e82b2268e1d3,PodSandboxId:fec37181f6706eb4994bc850d0e6623521190c923720024b4407780ba5c3168a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:C
ONTAINER_RUNNING,CreatedAt:1761986732059653348,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-vssmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b8c16e-b583-47df-a5c2-97218d3ec5be,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ff7b8e8784408623315cf07e8942d13f74e52cb65ad09e2d25796114020c1,PodSandboxId:d62d15d11c4955eb24e7866e8b7732b6d4471d399c0e33cef74d06eb40917eec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNI
NG,CreatedAt:1761986725130503569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2rqh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131b2b2-f9b9-4197-8bc7-4d1bc185c804,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0a2f86b38f42fab057b3fea7994c15073ec1d05f3db97341f0fed0ad342cf9,PodSandboxId:e1fb2fcb1123b9a18ac17a1d8481
c82478eed03828d094aab60d26b7c2f58bbd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761986724242985390,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fbmdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5dd6b4-2f38-4c9d-acd8-92f7984fd96a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80489befa62b8185c103a7d016a78a5924e4c5187536cb66142d1c5f8cc4a5b5,PodSandboxId:d4cfa30f1a32a450d85f51370323574b5a0bcae75643efe39250a8b24cc1a
1c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761986712208719638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0eeda84be59c6c1c023d04bf2f88758,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:844d913e662bc4587cf597763a1bad42b
b8a4bf500ce948d822cfcb86a7e9fde,PodSandboxId:e2f739ab181cd43a508788c71e0d98b6ca0994d643a2896de2364e7f842ffa0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761986712197993742,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d081dd6df6b55662a095a017ad5712,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.term
inationGracePeriod: 30,},},&Container{Id:fdeec4098b47d6e27b77f71ac1761aeb26a09c97d53566cde6a7c5ae79150c25,PodSandboxId:f1c88f09470e5834b2b0cfcdaddaf03ac25c10fd6f3492dc69b5941eb059bbae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761986712168522475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abcff5cb337834c6fd7a11d68a6b7be4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35bb45a49c1f528c9112deb8bfa037389ae6fae43afcbb2f86e4c3ed61156bf8,PodSandboxId:80615bf9878bb70db26be3ecace94169c4b7e503113541f10f7df27e95d8c035,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761986712170158026,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5912e2b5f9c4192157a57bf3d5021f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10
257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f625037b-dfa4-460b-916e-2573af77cec8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:57:26 addons-994396 crio[817]: time="2025-11-01 08:57:26.002203753Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=129bbcc0-23e6-463a-a614-c753ffc47b5d name=/runtime.v1.RuntimeService/Version
	Nov 01 08:57:26 addons-994396 crio[817]: time="2025-11-01 08:57:26.002332035Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=129bbcc0-23e6-463a-a614-c753ffc47b5d name=/runtime.v1.RuntimeService/Version
	Nov 01 08:57:26 addons-994396 crio[817]: time="2025-11-01 08:57:26.004290947Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=377e71ef-cab7-4770-9328-06ff2cbef1aa name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 08:57:26 addons-994396 crio[817]: time="2025-11-01 08:57:26.005457888Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761987446005427909,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:454585,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=377e71ef-cab7-4770-9328-06ff2cbef1aa name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 08:57:26 addons-994396 crio[817]: time="2025-11-01 08:57:26.007122513Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a3cf6947-1432-43e7-bd44-973f602a47df name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:57:26 addons-994396 crio[817]: time="2025-11-01 08:57:26.007204769Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a3cf6947-1432-43e7-bd44-973f602a47df name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:57:26 addons-994396 crio[817]: time="2025-11-01 08:57:26.007673980Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9aac7eb34690309e8dbd81343ee4a3afed4182f729bfb09119b2d0449fcb5163,PodSandboxId:cdbcecc3e9d43396748d11feb94389c468413b4e4db1f33c0ffbb67ba8cb8455,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761987117609973399,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f6cc746-15b0-4ddb-9f8b-fa3a7e7133ea,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c914a21ca5c30d325bf10151384a21f9bbcc7e25b2d34ca61bfaddd16505122,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1761987080383755595,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:437ef3bce50ac8a7ca0b9a31a96e010fea2dd24bba8a7a5f778f7bb5721a6a9d,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1761987048807726890,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f73cee1644b036ab76f839b96acf06de4009bbf807c978116290374a0b56065c,PodSandboxId:147663b03fe636d80386c5b9e498c5fb95c78d278121e7fb146f12c7e973609d,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761986999531197788,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-9cxnd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bf616938-c2ab-4f4c-92c8-9fa4ab2f6be9,},Annotations:map[string]
string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:862808e2ff30fdd764f8aaf3d5b1a5df067d9f837db07ff0372f86bd3b55cab5,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1761986992483188170,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4eac7bee2514139306d8419dc1c70f3cc677629e0546239a0322053b09eab44,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1761986961550289998,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89e19f39781eba8b57e656eb2450f2409f9b0faf0e3401335506a480d9066dc6,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1761986930173408810,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68bf99b640c16170eb3d1decd09fc1b538fbd6fde76792990703d14d18fd9728,PodSandboxId:c090988aa5e05ea1d7a0662eb99922460d3efcf1e9882123710f19fefe939704,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1761986868787532616,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf63ab79-b3fa-4917-a62b-a0758d1521b0,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39137378c3801cd49058632db343f950f188a84e2ff8cf681c71963efac4314f,PodSandboxId:6eaf5e212ad1c55657254e78247ce413b9c2d3e12e8e2cd69b6ccde788266623,Metadata:&ContainerMetadata{Name
:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1761986866382667222,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ee1d9b2-a99a-4003-9c65-77bd5e500b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80b7ac026d7558ab3c69afb722ff55dfe32d67be3e2bf197089b95da3dd31104,PodSandboxId:5ef1abbd77f24535b60585d2197c8a2259c59626ad0eb005b609003b505409e3,Metada
ta:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761986864620312300,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-jbkmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19dc2ae7-668b-4952-9c2d-6602eac4449e,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63011b6ec66fda56834e6c96c9772b128675e14e51fd5b96d9518a8ba29fa35,PodSandbox
Id:eeeab7772fb0e74c5be38da53381a6b90d0d5c26e9c8b732d2e1c6eb63671c65,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761986864516805400,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-2pbx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9e973a4-20dd-4785-a3d6-1557c012cc76,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6
e0352b147e8a8fe43c9d94072f3f3fcc98914a55a5718cfd5fe168dcdb81f49,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1761986863046366251,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fbb154c5ba009280da1a426866a4cdde2195fb0006640dafb05c0da182a4866,PodSandboxId:058d4f2c90db7e8eae07ad5783426e56e467541eacbcb171f0f9227663407e68,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761986861153109309,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dmt9r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7e49bedc-b72d-400d-bc07-62040e55ac39,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e6c68a57ee535127b46ca112ce1439ee32d248af87fb4452856eb3e38c8eb2e,PodSandboxId:a5dfb28615faf962ed89b8003d79c80e87152c2a8d669af58898bd3254030389,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761986861018576547,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6ptqs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9fe7abf8-c7e2-47ee-ac99-699c34674a22,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d2226436f827529da95ea6b9148e9aad9e62a07499351f701e80b097311d036,PodSandboxId:c449271f0824b108061a1ee1fc23fbe6d16056014d0cfc3011aa2c20b94a8e24,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1761986829754350164,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-bzs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151e456a-63e0-4527-8511-34c4444fef48,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.
ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda41d22ea7ff808cb20920820ccf87f95d0c484f75f853dec58fc5d4aaa461b,PodSandboxId:e07af8e7a3ecad5569ae3da9545b988c374ac9f7b90e8533dd68c1dd6ecef92c,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761986826170977750,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-z8nnd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c555360c-9a9f-4f
dd-aa67-f18c3d2a4eb2,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b56bd6c195bd711f17cd7b927c9fbb20679383d08b6e954d3297e9850be5235,PodSandboxId:6d69749ca9bc78fa01c49c7d0757f3d0eafa3536279a622367a1a3b427e5d70c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1761986821805194743,Labels:map[string]string{io.kubernetes.container.name: local-pa
th-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-9ghvj,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d3c3231a-40d9-42f1-bc78-e2d1a104327a,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4c1be283a7f47690c854c85c4dcacc3e8b42f6727081c4a8a73e3e44c1d194,PodSandboxId:9f7ac0dd48cc1abeb4273f865cde830d51e77c8bd29a6c76ccecaf35745e99f7,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:176198675844940796
3,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d947f942-2149-492a-9b4e-1f9c22405815,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad7748982f904bf89ac86d1b7be83acfe37cfe9d240db5a3d2236808b8910a3,PodSandboxId:ca1dd787f338ac0254f2b930b7369f671d7ee68d7732bee6af1cf786d745c456,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c887
2c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761986733821709901,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0182754-0c9c-458b-a340-20ec025cb56c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bb5f4d4e768dfe5c0cf6bc80363bf72a32d74ddba50c19fc7e3e82b2268e1d3,PodSandboxId:fec37181f6706eb4994bc850d0e6623521190c923720024b4407780ba5c3168a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761986732059653348,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-vssmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b8c16e-b583-47df-a5c2-97218d3ec5be,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ff7b8e8784408623315cf07e8942d13f74e52cb65ad09e2d25796114020c1,PodSandboxId:d62d15d11c4955eb24e7866e8b7732b6d4471d399c0e33cef74d06eb40917eec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e
0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761986725130503569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2rqh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131b2b2-f9b9-4197-8bc7-4d1bc185c804,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0a2f86b38f42fab057b3fea7994c150
73ec1d05f3db97341f0fed0ad342cf9,PodSandboxId:e1fb2fcb1123b9a18ac17a1d8481c82478eed03828d094aab60d26b7c2f58bbd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761986724242985390,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fbmdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5dd6b4-2f38-4c9d-acd8-92f7984fd96a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80489befa62b8185c103a7d016a78a5924e4c5187536cb66142d1c5f8cc4a5b5,P
odSandboxId:d4cfa30f1a32a450d85f51370323574b5a0bcae75643efe39250a8b24cc1a1c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761986712208719638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0eeda84be59c6c1c023d04bf2f88758,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:844d913e662bc4587cf597763a1bad42bb8a4bf500ce948d822cfcb86a7e9fde,PodSandboxId:e2f739ab181cd43a508788c71e0d98b6ca0994d643a2896de2364e7f842ffa0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761986712197993742,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d081dd6df6b55662a095a017ad5712,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdeec4098b47d6e27b77f71ac1761aeb26a09c97d53566cde6a7c5ae79150c25,PodSandboxId:f1c88f09470e5834b2b0cfcdaddaf03ac25c10fd6f3492dc69b5941eb059bbae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761986712168522475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abcff5cb337834c6fd7a11d68a6b7be4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35bb45a49c1f528c9112deb8bfa037389ae6fae43afcbb2f86e4c3ed61156bf8,PodSandboxId:80615bf9878bb70db26be3ecace94169c4b7e503113541f10f7df27e95d8c035,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761986712170158026,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5912e2b5f9c4192157a57bf3d5021f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505
,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a3cf6947-1432-43e7-bd44-973f602a47df name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:57:26 addons-994396 crio[817]: time="2025-11-01 08:57:26.047450444Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fca97ace-adf3-429d-8394-2b1241c839b0 name=/runtime.v1.RuntimeService/Version
	Nov 01 08:57:26 addons-994396 crio[817]: time="2025-11-01 08:57:26.047843219Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fca97ace-adf3-429d-8394-2b1241c839b0 name=/runtime.v1.RuntimeService/Version
	Nov 01 08:57:26 addons-994396 crio[817]: time="2025-11-01 08:57:26.050016750Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e1d804df-ccb2-442c-97c2-9328b8a6c592 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 08:57:26 addons-994396 crio[817]: time="2025-11-01 08:57:26.051296663Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761987446051270799,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:454585,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e1d804df-ccb2-442c-97c2-9328b8a6c592 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 08:57:26 addons-994396 crio[817]: time="2025-11-01 08:57:26.052059826Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5fcd889c-3039-44f1-acdd-d4a74b195d0e name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:57:26 addons-994396 crio[817]: time="2025-11-01 08:57:26.052311319Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5fcd889c-3039-44f1-acdd-d4a74b195d0e name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:57:26 addons-994396 crio[817]: time="2025-11-01 08:57:26.052863037Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9aac7eb34690309e8dbd81343ee4a3afed4182f729bfb09119b2d0449fcb5163,PodSandboxId:cdbcecc3e9d43396748d11feb94389c468413b4e4db1f33c0ffbb67ba8cb8455,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761987117609973399,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f6cc746-15b0-4ddb-9f8b-fa3a7e7133ea,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c914a21ca5c30d325bf10151384a21f9bbcc7e25b2d34ca61bfaddd16505122,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1761987080383755595,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:437ef3bce50ac8a7ca0b9a31a96e010fea2dd24bba8a7a5f778f7bb5721a6a9d,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1761987048807726890,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f73cee1644b036ab76f839b96acf06de4009bbf807c978116290374a0b56065c,PodSandboxId:147663b03fe636d80386c5b9e498c5fb95c78d278121e7fb146f12c7e973609d,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761986999531197788,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-9cxnd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bf616938-c2ab-4f4c-92c8-9fa4ab2f6be9,},Annotations:map[string]
string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:862808e2ff30fdd764f8aaf3d5b1a5df067d9f837db07ff0372f86bd3b55cab5,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1761986992483188170,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4eac7bee2514139306d8419dc1c70f3cc677629e0546239a0322053b09eab44,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1761986961550289998,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89e19f39781eba8b57e656eb2450f2409f9b0faf0e3401335506a480d9066dc6,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1761986930173408810,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68bf99b640c16170eb3d1decd09fc1b538fbd6fde76792990703d14d18fd9728,PodSandboxId:c090988aa5e05ea1d7a0662eb99922460d3efcf1e9882123710f19fefe939704,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1761986868787532616,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf63ab79-b3fa-4917-a62b-a0758d1521b0,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39137378c3801cd49058632db343f950f188a84e2ff8cf681c71963efac4314f,PodSandboxId:6eaf5e212ad1c55657254e78247ce413b9c2d3e12e8e2cd69b6ccde788266623,Metadata:&ContainerMetadata{Name
:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1761986866382667222,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ee1d9b2-a99a-4003-9c65-77bd5e500b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80b7ac026d7558ab3c69afb722ff55dfe32d67be3e2bf197089b95da3dd31104,PodSandboxId:5ef1abbd77f24535b60585d2197c8a2259c59626ad0eb005b609003b505409e3,Metada
ta:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761986864620312300,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-jbkmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19dc2ae7-668b-4952-9c2d-6602eac4449e,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63011b6ec66fda56834e6c96c9772b128675e14e51fd5b96d9518a8ba29fa35,PodSandbox
Id:eeeab7772fb0e74c5be38da53381a6b90d0d5c26e9c8b732d2e1c6eb63671c65,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761986864516805400,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-2pbx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9e973a4-20dd-4785-a3d6-1557c012cc76,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6
e0352b147e8a8fe43c9d94072f3f3fcc98914a55a5718cfd5fe168dcdb81f49,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1761986863046366251,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fbb154c5ba009280da1a426866a4cdde2195fb0006640dafb05c0da182a4866,PodSandboxId:058d4f2c90db7e8eae07ad5783426e56e467541eacbcb171f0f9227663407e68,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761986861153109309,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dmt9r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7e49bedc-b72d-400d-bc07-62040e55ac39,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e6c68a57ee535127b46ca112ce1439ee32d248af87fb4452856eb3e38c8eb2e,PodSandboxId:a5dfb28615faf962ed89b8003d79c80e87152c2a8d669af58898bd3254030389,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761986861018576547,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6ptqs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9fe7abf8-c7e2-47ee-ac99-699c34674a22,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d2226436f827529da95ea6b9148e9aad9e62a07499351f701e80b097311d036,PodSandboxId:c449271f0824b108061a1ee1fc23fbe6d16056014d0cfc3011aa2c20b94a8e24,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1761986829754350164,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-bzs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151e456a-63e0-4527-8511-34c4444fef48,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.
ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda41d22ea7ff808cb20920820ccf87f95d0c484f75f853dec58fc5d4aaa461b,PodSandboxId:e07af8e7a3ecad5569ae3da9545b988c374ac9f7b90e8533dd68c1dd6ecef92c,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761986826170977750,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-z8nnd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c555360c-9a9f-4f
dd-aa67-f18c3d2a4eb2,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b56bd6c195bd711f17cd7b927c9fbb20679383d08b6e954d3297e9850be5235,PodSandboxId:6d69749ca9bc78fa01c49c7d0757f3d0eafa3536279a622367a1a3b427e5d70c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1761986821805194743,Labels:map[string]string{io.kubernetes.container.name: local-pa
th-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-9ghvj,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d3c3231a-40d9-42f1-bc78-e2d1a104327a,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4c1be283a7f47690c854c85c4dcacc3e8b42f6727081c4a8a73e3e44c1d194,PodSandboxId:9f7ac0dd48cc1abeb4273f865cde830d51e77c8bd29a6c76ccecaf35745e99f7,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:176198675844940796
3,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d947f942-2149-492a-9b4e-1f9c22405815,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad7748982f904bf89ac86d1b7be83acfe37cfe9d240db5a3d2236808b8910a3,PodSandboxId:ca1dd787f338ac0254f2b930b7369f671d7ee68d7732bee6af1cf786d745c456,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c887
2c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761986733821709901,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0182754-0c9c-458b-a340-20ec025cb56c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bb5f4d4e768dfe5c0cf6bc80363bf72a32d74ddba50c19fc7e3e82b2268e1d3,PodSandboxId:fec37181f6706eb4994bc850d0e6623521190c923720024b4407780ba5c3168a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761986732059653348,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-vssmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b8c16e-b583-47df-a5c2-97218d3ec5be,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ff7b8e8784408623315cf07e8942d13f74e52cb65ad09e2d25796114020c1,PodSandboxId:d62d15d11c4955eb24e7866e8b7732b6d4471d399c0e33cef74d06eb40917eec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e
0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761986725130503569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2rqh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131b2b2-f9b9-4197-8bc7-4d1bc185c804,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0a2f86b38f42fab057b3fea7994c150
73ec1d05f3db97341f0fed0ad342cf9,PodSandboxId:e1fb2fcb1123b9a18ac17a1d8481c82478eed03828d094aab60d26b7c2f58bbd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761986724242985390,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fbmdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5dd6b4-2f38-4c9d-acd8-92f7984fd96a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80489befa62b8185c103a7d016a78a5924e4c5187536cb66142d1c5f8cc4a5b5,P
odSandboxId:d4cfa30f1a32a450d85f51370323574b5a0bcae75643efe39250a8b24cc1a1c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761986712208719638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0eeda84be59c6c1c023d04bf2f88758,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:844d913e662bc4587cf597763a1bad42bb8a4bf500ce948d822cfcb86a7e9fde,PodSandboxId:e2f739ab181cd43a508788c71e0d98b6ca0994d643a2896de2364e7f842ffa0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761986712197993742,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d081dd6df6b55662a095a017ad5712,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdeec4098b47d6e27b77f71ac1761aeb26a09c97d53566cde6a7c5ae79150c25,PodSandboxId:f1c88f09470e5834b2b0cfcdaddaf03ac25c10fd6f3492dc69b5941eb059bbae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761986712168522475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abcff5cb337834c6fd7a11d68a6b7be4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35bb45a49c1f528c9112deb8bfa037389ae6fae43afcbb2f86e4c3ed61156bf8,PodSandboxId:80615bf9878bb70db26be3ecace94169c4b7e503113541f10f7df27e95d8c035,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761986712170158026,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5912e2b5f9c4192157a57bf3d5021f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505
,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5fcd889c-3039-44f1-acdd-d4a74b195d0e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	9aac7eb346903       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          5 minutes ago       Running             busybox                                  0                   cdbcecc3e9d43       busybox
	8c914a21ca5c3       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          6 minutes ago       Running             csi-snapshotter                          0                   89c5974bdcafd       csi-hostpathplugin-7l7ps
	437ef3bce50ac       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          6 minutes ago       Running             csi-provisioner                          0                   89c5974bdcafd       csi-hostpathplugin-7l7ps
	f73cee1644b03       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd                             7 minutes ago       Running             controller                               0                   147663b03fe63       ingress-nginx-controller-675c5ddd98-9cxnd
	862808e2ff30f       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            7 minutes ago       Running             liveness-probe                           0                   89c5974bdcafd       csi-hostpathplugin-7l7ps
	a4eac7bee2514       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           8 minutes ago       Running             hostpath                                 0                   89c5974bdcafd       csi-hostpathplugin-7l7ps
	89e19f39781eb       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                8 minutes ago       Running             node-driver-registrar                    0                   89c5974bdcafd       csi-hostpathplugin-7l7ps
	68bf99b640c16       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              9 minutes ago       Running             csi-resizer                              0                   c090988aa5e05       csi-hostpath-resizer-0
	39137378c3801       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             9 minutes ago       Running             csi-attacher                             0                   6eaf5e212ad1c       csi-hostpath-attacher-0
	80b7ac026d755       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      9 minutes ago       Running             volume-snapshot-controller               0                   5ef1abbd77f24       snapshot-controller-7d9fbc56b8-jbkmr
	a63011b6ec66f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      9 minutes ago       Running             volume-snapshot-controller               0                   eeeab7772fb0e       snapshot-controller-7d9fbc56b8-2pbx5
	6e0352b147e8a       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   9 minutes ago       Running             csi-external-health-monitor-controller   0                   89c5974bdcafd       csi-hostpathplugin-7l7ps
	7fbb154c5ba00       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39                   9 minutes ago       Exited              patch                                    0                   058d4f2c90db7       ingress-nginx-admission-patch-dmt9r
	5e6c68a57ee53       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39                   9 minutes ago       Exited              create                                   0                   a5dfb28615faf       ingress-nginx-admission-create-6ptqs
	6d2226436f827       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              10 minutes ago      Running             registry-proxy                           0                   c449271f0824b       registry-proxy-bzs78
	dda41d22ea7ff       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            10 minutes ago      Running             gadget                                   0                   e07af8e7a3eca       gadget-z8nnd
	9b56bd6c195bd       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             10 minutes ago      Running             local-path-provisioner                   0                   6d69749ca9bc7       local-path-provisioner-648f6765c9-9ghvj
	7b4c1be283a7f       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               11 minutes ago      Running             minikube-ingress-dns                     0                   9f7ac0dd48cc1       kube-ingress-dns-minikube
	2ad7748982f90       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             11 minutes ago      Running             storage-provisioner                      0                   ca1dd787f338a       storage-provisioner
	9bb5f4d4e768d       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     11 minutes ago      Running             amd-gpu-device-plugin                    0                   fec37181f6706       amd-gpu-device-plugin-vssmp
	9d0ff7b8e8784       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             12 minutes ago      Running             coredns                                  0                   d62d15d11c495       coredns-66bc5c9577-2rqh8
	9d0a2f86b38f4       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             12 minutes ago      Running             kube-proxy                               0                   e1fb2fcb1123b       kube-proxy-fbmdq
	80489befa62b8       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             12 minutes ago      Running             kube-scheduler                           0                   d4cfa30f1a32a       kube-scheduler-addons-994396
	844d913e662bc       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             12 minutes ago      Running             etcd                                     0                   e2f739ab181cd       etcd-addons-994396
	35bb45a49c1f5       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             12 minutes ago      Running             kube-controller-manager                  0                   80615bf9878bb       kube-controller-manager-addons-994396
	fdeec4098b47d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             12 minutes ago      Running             kube-apiserver                           0                   f1c88f09470e5       kube-apiserver-addons-994396
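
	The table above is the node-level container listing captured for this report. If it needs to be re-checked against the live node, something like the command below should work (the profile name addons-994396 is taken from this run, and crictl is assumed to be available inside the minikube VM, as it normally is with the cri-o runtime):

	    out/minikube-linux-amd64 -p addons-994396 ssh "sudo crictl ps -a"

	The detail most relevant to the failing Registry test is what is missing here: only registry-proxy-bzs78 appears, no registry container was ever created on the node.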
	
	
	==> coredns [9d0ff7b8e8784408623315cf07e8942d13f74e52cb65ad09e2d25796114020c1] <==
	[INFO] 10.244.0.8:35593 - 41258 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000117914s
	[INFO] 10.244.0.8:43909 - 57816 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000160427s
	[INFO] 10.244.0.8:43909 - 48110 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000147434s
	[INFO] 10.244.0.8:43909 - 53266 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000126488s
	[INFO] 10.244.0.8:43909 - 51997 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000149109s
	[INFO] 10.244.0.8:43909 - 11090 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000112538s
	[INFO] 10.244.0.8:43909 - 24083 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000116658s
	[INFO] 10.244.0.8:43909 - 48113 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000088363s
	[INFO] 10.244.0.8:43909 - 33139 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000111775s
	[INFO] 10.244.0.8:49149 - 35026 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000155678s
	[INFO] 10.244.0.8:49149 - 23781 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000192393s
	[INFO] 10.244.0.8:49149 - 56109 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000085389s
	[INFO] 10.244.0.8:49149 - 2176 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00040789s
	[INFO] 10.244.0.8:49149 - 20237 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000089525s
	[INFO] 10.244.0.8:49149 - 14026 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000428044s
	[INFO] 10.244.0.8:49149 - 64308 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00011308s
	[INFO] 10.244.0.8:49149 - 44536 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000174439s
	[INFO] 10.244.0.8:50543 - 14314 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.00030393s
	[INFO] 10.244.0.8:50543 - 46371 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000515086s
	[INFO] 10.244.0.8:50543 - 25335 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000115841s
	[INFO] 10.244.0.8:50543 - 48961 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000323469s
	[INFO] 10.244.0.8:50543 - 37392 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00010768s
	[INFO] 10.244.0.8:50543 - 24322 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000210224s
	[INFO] 10.244.0.8:50543 - 60352 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000111171s
	[INFO] 10.244.0.8:50543 - 34628 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000170516s
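
	The NXDOMAIN bursts above are the normal ndots search-list expansion for a short name looked up from a pod evidently in the kube-system namespace (client 10.244.0.8), not resolution failures: the fully qualified registry.kube-system.svc.cluster.local. queries all return NOERROR with an answer. Assuming the stock cluster DNS setup, the pod's /etc/resolv.conf carries the three-level search list (kube-system.svc.cluster.local, svc.cluster.local, cluster.local) plus options ndots:5, which produces exactly this query pattern. A direct check could be run from the busybox pod listed earlier in this report (it sits in the default namespace, so its search list starts with default. instead, but the mechanism is the same):

	    kubectl --context addons-994396 exec busybox -- cat /etc/resolv.conf
	    kubectl --context addons-994396 exec busybox -- nslookup registry.kube-system.svc.cluster.local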
	
	
	==> describe nodes <==
	Name:               addons-994396
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-994396
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=addons-994396
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T08_45_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-994396
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-994396"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 08:45:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-994396
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 08:57:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 08:56:22 +0000   Sat, 01 Nov 2025 08:45:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 08:56:22 +0000   Sat, 01 Nov 2025 08:45:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 08:56:22 +0000   Sat, 01 Nov 2025 08:45:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 08:56:22 +0000   Sat, 01 Nov 2025 08:45:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    addons-994396
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 47158355a9594cbf84ea23a10000597a
	  System UUID:                47158355-a959-4cbf-84ea-23a10000597a
	  Boot ID:                    8b22796c-545f-4b51-954a-eb39441cd160
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (24 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  gadget                      gadget-z8nnd                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-9cxnd                     100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         11m
	  kube-system                 amd-gpu-device-plugin-vssmp                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-66bc5c9577-2rqh8                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpathplugin-7l7ps                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-addons-994396                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-994396                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-994396                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-fbmdq                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-994396                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 registry-6b586f9694-b4ph6                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 registry-creds-764b6fb674-xstzf                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 registry-proxy-bzs78                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-7d9fbc56b8-2pbx5                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-7d9fbc56b8-jbkmr                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          helper-pod-create-pvc-2db794c4-2444-4d03-b933-772cf722902e    0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  local-path-storage          local-path-provisioner-648f6765c9-9ghvj                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-994396 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-994396 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-994396 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node addons-994396 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node addons-994396 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node addons-994396 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m                kubelet          Node addons-994396 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node addons-994396 event: Registered Node addons-994396 in Controller
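
	This node description was captured at failure time; the same view can be regenerated with the kubectl context used throughout this report:

	    kubectl --context addons-994396 describe node addons-994396

	Worth noting from the tables above: the node has only 2 CPUs and roughly 4Gi of memory, with 850m CPU (42%) already requested across 24 pods, so the addon workloads are competing for a fairly small machine.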
	
	
	==> dmesg <==
	[  +9.205621] kauditd_printk_skb: 11 callbacks suppressed
	[Nov 1 08:46] kauditd_printk_skb: 5 callbacks suppressed
	[Nov 1 08:47] kauditd_printk_skb: 32 callbacks suppressed
	[ +34.333332] kauditd_printk_skb: 101 callbacks suppressed
	[  +3.822306] kauditd_printk_skb: 111 callbacks suppressed
	[  +1.002792] kauditd_printk_skb: 88 callbacks suppressed
	[Nov 1 08:49] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.000036] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.000133] kauditd_printk_skb: 29 callbacks suppressed
	[ +11.240953] kauditd_printk_skb: 41 callbacks suppressed
	[Nov 1 08:50] kauditd_printk_skb: 17 callbacks suppressed
	[ +34.452421] kauditd_printk_skb: 2 callbacks suppressed
	[Nov 1 08:51] kauditd_printk_skb: 26 callbacks suppressed
	[  +0.000047] kauditd_printk_skb: 5 callbacks suppressed
	[ +21.931610] kauditd_printk_skb: 26 callbacks suppressed
	[Nov 1 08:52] kauditd_printk_skb: 5 callbacks suppressed
	[  +6.008516] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.922747] kauditd_printk_skb: 38 callbacks suppressed
	[  +6.151130] kauditd_printk_skb: 37 callbacks suppressed
	[ +11.857033] kauditd_printk_skb: 84 callbacks suppressed
	[  +0.000069] kauditd_printk_skb: 22 callbacks suppressed
	[Nov 1 08:54] kauditd_printk_skb: 26 callbacks suppressed
	[ +40.501255] kauditd_printk_skb: 2 callbacks suppressed
	[Nov 1 08:55] kauditd_printk_skb: 9 callbacks suppressed
	[Nov 1 08:56] kauditd_printk_skb: 45 callbacks suppressed
	
	
	==> etcd [844d913e662bc4587cf597763a1bad42bb8a4bf500ce948d822cfcb86a7e9fde] <==
	{"level":"info","ts":"2025-11-01T08:47:54.978149Z","caller":"traceutil/trace.go:172","msg":"trace[879398792] linearizableReadLoop","detail":"{readStateIndex:1248; appliedIndex:1248; }","duration":"128.792993ms","start":"2025-11-01T08:47:54.849340Z","end":"2025-11-01T08:47:54.978133Z","steps":["trace[879398792] 'read index received'  (duration: 128.787273ms)","trace[879398792] 'applied index is now lower than readState.Index'  (duration: 4.859µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T08:47:54.978274Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.918573ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:47:54.978294Z","caller":"traceutil/trace.go:172","msg":"trace[478888116] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1194; }","duration":"128.951874ms","start":"2025-11-01T08:47:54.849337Z","end":"2025-11-01T08:47:54.978289Z","steps":["trace[478888116] 'agreement among raft nodes before linearized reading'  (duration: 128.896473ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:47:54.978301Z","caller":"traceutil/trace.go:172","msg":"trace[127276739] transaction","detail":"{read_only:false; response_revision:1195; number_of_response:1; }","duration":"193.938157ms","start":"2025-11-01T08:47:54.784350Z","end":"2025-11-01T08:47:54.978289Z","steps":["trace[127276739] 'process raft request'  (duration: 193.811655ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:50:03.807211Z","caller":"traceutil/trace.go:172","msg":"trace[306428088] transaction","detail":"{read_only:false; response_revision:1410; number_of_response:1; }","duration":"143.076836ms","start":"2025-11-01T08:50:03.664107Z","end":"2025-11-01T08:50:03.807184Z","steps":["trace[306428088] 'process raft request'  (duration: 142.860459ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:50:30.399983Z","caller":"traceutil/trace.go:172","msg":"trace[417490432] transaction","detail":"{read_only:false; response_revision:1462; number_of_response:1; }","duration":"105.005558ms","start":"2025-11-01T08:50:30.294965Z","end":"2025-11-01T08:50:30.399970Z","steps":["trace[417490432] 'process raft request'  (duration: 104.840267ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:51:25.785305Z","caller":"traceutil/trace.go:172","msg":"trace[446064097] linearizableReadLoop","detail":"{readStateIndex:1675; appliedIndex:1675; }","duration":"202.139299ms","start":"2025-11-01T08:51:25.583130Z","end":"2025-11-01T08:51:25.785270Z","steps":["trace[446064097] 'read index received'  (duration: 202.133895ms)","trace[446064097] 'applied index is now lower than readState.Index'  (duration: 4.594µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T08:51:25.785474Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"202.320618ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:51:25.785498Z","caller":"traceutil/trace.go:172","msg":"trace[2127751376] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1576; }","duration":"202.392505ms","start":"2025-11-01T08:51:25.583101Z","end":"2025-11-01T08:51:25.785493Z","steps":["trace[2127751376] 'agreement among raft nodes before linearized reading'  (duration: 202.298341ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:51:25.785518Z","caller":"traceutil/trace.go:172","msg":"trace[25251410] transaction","detail":"{read_only:false; response_revision:1577; number_of_response:1; }","duration":"230.552599ms","start":"2025-11-01T08:51:25.554955Z","end":"2025-11-01T08:51:25.785507Z","steps":["trace[25251410] 'process raft request'  (duration: 230.448007ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:52:18.027453Z","caller":"traceutil/trace.go:172","msg":"trace[1612683542] linearizableReadLoop","detail":"{readStateIndex:1872; appliedIndex:1872; }","duration":"169.871386ms","start":"2025-11-01T08:52:17.857553Z","end":"2025-11-01T08:52:18.027424Z","steps":["trace[1612683542] 'read index received'  (duration: 169.865757ms)","trace[1612683542] 'applied index is now lower than readState.Index'  (duration: 4.911µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T08:52:18.027601Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"170.004057ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:52:18.027618Z","caller":"traceutil/trace.go:172","msg":"trace[354966435] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1760; }","duration":"170.064613ms","start":"2025-11-01T08:52:17.857549Z","end":"2025-11-01T08:52:18.027613Z","steps":["trace[354966435] 'agreement among raft nodes before linearized reading'  (duration: 169.976661ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:52:18.027617Z","caller":"traceutil/trace.go:172","msg":"trace[182557049] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1761; }","duration":"175.595316ms","start":"2025-11-01T08:52:17.852012Z","end":"2025-11-01T08:52:18.027607Z","steps":["trace[182557049] 'process raft request'  (duration: 175.503416ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:52:23.484737Z","caller":"traceutil/trace.go:172","msg":"trace[1326759402] linearizableReadLoop","detail":"{readStateIndex:1904; appliedIndex:1904; }","duration":"340.503004ms","start":"2025-11-01T08:52:23.144214Z","end":"2025-11-01T08:52:23.484717Z","steps":["trace[1326759402] 'read index received'  (duration: 340.496208ms)","trace[1326759402] 'applied index is now lower than readState.Index'  (duration: 5.868µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T08:52:23.485008Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"340.771395ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1114"}
	{"level":"info","ts":"2025-11-01T08:52:23.485058Z","caller":"traceutil/trace.go:172","msg":"trace[1039449345] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1790; }","duration":"340.841883ms","start":"2025-11-01T08:52:23.144209Z","end":"2025-11-01T08:52:23.485051Z","steps":["trace[1039449345] 'agreement among raft nodes before linearized reading'  (duration: 340.62868ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T08:52:23.485106Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T08:52:23.144193Z","time spent":"340.902265ms","remote":"127.0.0.1:36552","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":1,"response size":1137,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 "}
	{"level":"warn","ts":"2025-11-01T08:52:23.485553Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"287.574901ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:52:23.485588Z","caller":"traceutil/trace.go:172","msg":"trace[1585287071] range","detail":"{range_begin:/registry/namespaces; range_end:; response_count:0; response_revision:1791; }","duration":"287.617514ms","start":"2025-11-01T08:52:23.197963Z","end":"2025-11-01T08:52:23.485581Z","steps":["trace[1585287071] 'agreement among raft nodes before linearized reading'  (duration: 287.549253ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:52:23.485660Z","caller":"traceutil/trace.go:172","msg":"trace[1103263823] transaction","detail":"{read_only:false; response_revision:1791; number_of_response:1; }","duration":"361.459988ms","start":"2025-11-01T08:52:23.124191Z","end":"2025-11-01T08:52:23.485651Z","steps":["trace[1103263823] 'process raft request'  (duration: 361.180443ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T08:52:23.485795Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T08:52:23.124175Z","time spent":"361.507625ms","remote":"127.0.0.1:36760","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1766 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2025-11-01T08:55:13.580313Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1434}
	{"level":"info","ts":"2025-11-01T08:55:13.648379Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1434,"took":"67.304726ms","hash":2547452093,"current-db-size-bytes":5730304,"current-db-size":"5.7 MB","current-db-size-in-use-bytes":3653632,"current-db-size-in-use":"3.7 MB"}
	{"level":"info","ts":"2025-11-01T08:55:13.648498Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2547452093,"revision":1434,"compact-revision":-1}
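
	The repeated "apply request took too long" warnings above show request latencies in roughly the 100-360 ms range against the 100 ms expectation, i.e. a slow but functional etcd on this small VM rather than an outage. A quick way to count how often this happened over the run, using only kubectl and grep:

	    kubectl --context addons-994396 -n kube-system logs etcd-addons-994396 | grep -c "apply request took too long"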
	
	
	==> kernel <==
	 08:57:26 up 12 min,  0 users,  load average: 0.13, 0.44, 0.40
	Linux addons-994396 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [fdeec4098b47d6e27b77f71ac1761aeb26a09c97d53566cde6a7c5ae79150c25] <==
	W1101 08:46:31.751759       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 08:46:31.751828       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1101 08:46:31.751848       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1101 08:46:31.752853       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 08:46:31.752966       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1101 08:46:31.753020       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1101 08:48:03.292013       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.19.139:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.19.139:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.19.139:443: connect: connection refused" logger="UnhandledError"
	W1101 08:48:03.296407       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 08:48:03.296747       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1101 08:48:03.297742       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.19.139:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.19.139:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.19.139:443: connect: connection refused" logger="UnhandledError"
	E1101 08:48:03.298496       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.19.139:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.19.139:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.19.139:443: connect: connection refused" logger="UnhandledError"
	I1101 08:48:03.353240       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1101 08:52:03.525330       1 conn.go:339] Error on socket receive: read tcp 192.168.39.195:8443->192.168.39.1:42910: use of closed network connection
	E1101 08:52:03.723785       1 conn.go:339] Error on socket receive: read tcp 192.168.39.195:8443->192.168.39.1:42940: use of closed network connection
	I1101 08:52:12.984624       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.226.149"}
	I1101 08:53:04.341444       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1101 08:55:15.302985       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 08:56:08.891135       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1101 08:56:09.140799       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.237.168"}
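
	The OpenAPI and remote_available_controller errors above come from the Service backing the v1beta1.metrics.k8s.io APIService (10.99.19.139:443) refusing connections; by 08:53:04 the AggregationController reports the item removed from its queue, so these entries look unrelated to the Registry failure. If the APIService were still registered, its availability condition could be inspected with:

	    kubectl --context addons-994396 get apiservice v1beta1.metrics.k8s.io -o yaml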
	
	
	==> kube-controller-manager [35bb45a49c1f528c9112deb8bfa037389ae6fae43afcbb2f86e4c3ed61156bf8] <==
	E1101 08:46:22.433268       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 08:46:22.496038       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1101 08:46:52.438789       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 08:46:52.504482       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1101 08:47:22.446493       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 08:47:22.515370       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1101 08:47:52.452536       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 08:47:52.535721       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1101 08:52:17.008825       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I1101 08:52:35.860282       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	E1101 08:54:57.714310       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:54:57.738576       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:54:57.766801       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:54:57.805443       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:54:57.865423       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:54:57.962606       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:54:58.138236       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:54:58.477214       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:54:59.131849       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:55:00.430311       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:55:03.008821       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:55:08.147281       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:55:18.405556       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1101 08:55:27.269224       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace yakd-dashboard failed: failed to delete pods for namespace: yakd-dashboard, err: unexpected items still remain in namespace: yakd-dashboard for gvr: /v1, Resource=pods" logger="UnhandledError"
	I1101 08:56:13.507559       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
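
	The burst of "deletion of namespace yakd-dashboard failed" errors is the namespace controller retrying while pods in that namespace still remained; the final line shows the namespace was eventually removed at 08:56:13. While a namespace is stuck Terminating like this, the leftover objects and the controller's view of them can usually be checked with standard kubectl queries, for example:

	    kubectl --context addons-994396 get pods -n yakd-dashboard
	    kubectl --context addons-994396 get namespace yakd-dashboard -o jsonpath='{.status.conditions}'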
	
	
	==> kube-proxy [9d0a2f86b38f42fab057b3fea7994c15073ec1d05f3db97341f0fed0ad342cf9] <==
	I1101 08:45:24.962819       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 08:45:25.066839       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 08:45:25.068064       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.195"]
	E1101 08:45:25.073313       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 08:45:25.410848       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 08:45:25.410962       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 08:45:25.410991       1 server_linux.go:132] "Using iptables Proxier"
	I1101 08:45:25.477946       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 08:45:25.478244       1 server.go:527] "Version info" version="v1.34.1"
	I1101 08:45:25.478277       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 08:45:25.484125       1 config.go:106] "Starting endpoint slice config controller"
	I1101 08:45:25.484405       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 08:45:25.491275       1 config.go:200] "Starting service config controller"
	I1101 08:45:25.491309       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 08:45:25.494813       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 08:45:25.496161       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 08:45:25.495379       1 config.go:309] "Starting node config controller"
	I1101 08:45:25.506423       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 08:45:25.506433       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 08:45:25.584706       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 08:45:25.592170       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 08:45:25.598016       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
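
	The IPv6 message above is expected on this VM: ip6tables has no nat table, so kube-proxy runs single-stack IPv4 with the iptables proxier, which the subsequent lines confirm. If the IPv4 rules ever need a sanity check on the node, something along these lines should do it (KUBE-SERVICES is the standard kube-proxy chain; the ssh form mirrors the minikube invocations used elsewhere in this report):

	    out/minikube-linux-amd64 -p addons-994396 ssh "sudo iptables -t nat -L KUBE-SERVICES | head -n 20"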
	
	
	==> kube-scheduler [80489befa62b8185c103a7d016a78a5924e4c5187536cb66142d1c5f8cc4a5b5] <==
	E1101 08:45:15.349464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 08:45:15.349542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 08:45:15.349728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 08:45:15.349881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 08:45:15.352076       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 08:45:15.352119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 08:45:15.352139       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 08:45:15.352358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 08:45:15.352409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 08:45:15.357367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 08:45:15.357513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 08:45:15.357652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 08:45:16.203110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 08:45:16.263373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 08:45:16.299073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 08:45:16.424658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 08:45:16.486112       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 08:45:16.556670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 08:45:16.568573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 08:45:16.598275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 08:45:16.651957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 08:45:16.662617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 08:45:16.674245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 08:45:16.759792       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1101 08:45:19.143863       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 08:56:32 addons-994396 kubelet[1497]: E1101 08:56:32.972374    1497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-6b586f9694-b4ph6" podUID="f2c8e5be-bee4-4b31-a8dc-ee43d6a6430c"
	Nov 01 08:56:38 addons-994396 kubelet[1497]: E1101 08:56:38.385273    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761987398384821407  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:56:38 addons-994396 kubelet[1497]: E1101 08:56:38.385300    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761987398384821407  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:56:38 addons-994396 kubelet[1497]: E1101 08:56:38.969523    1497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="8623da74-791e-4fd6-a974-60ebca5738a7"
	Nov 01 08:56:43 addons-994396 kubelet[1497]: I1101 08:56:43.970307    1497 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-b4ph6" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 08:56:43 addons-994396 kubelet[1497]: E1101 08:56:43.972549    1497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-6b586f9694-b4ph6" podUID="f2c8e5be-bee4-4b31-a8dc-ee43d6a6430c"
	Nov 01 08:56:48 addons-994396 kubelet[1497]: E1101 08:56:48.388677    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761987408387834428  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:56:48 addons-994396 kubelet[1497]: E1101 08:56:48.388720    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761987408387834428  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:56:50 addons-994396 kubelet[1497]: I1101 08:56:50.969743    1497 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-bzs78" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 08:56:54 addons-994396 kubelet[1497]: I1101 08:56:54.970094    1497 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-b4ph6" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 08:56:54 addons-994396 kubelet[1497]: E1101 08:56:54.973500    1497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/registry:3.0.0@sha256:cd92709b4191c5779cd7215ccd695db6c54652e7a62843197e367427efb84d0e\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a in docker.io/library/registry: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/registry-6b586f9694-b4ph6" podUID="f2c8e5be-bee4-4b31-a8dc-ee43d6a6430c"
	Nov 01 08:56:56 addons-994396 kubelet[1497]: E1101 08:56:56.719977    1497 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Nov 01 08:56:56 addons-994396 kubelet[1497]: E1101 08:56:56.720092    1497 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Nov 01 08:56:56 addons-994396 kubelet[1497]: E1101 08:56:56.720485    1497 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-2db794c4-2444-4d03-b933-772cf722902e_local-path-storage(e25da403-345f-40f6-b6f9-e28731089dd6): ErrImagePull: reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 01 08:56:56 addons-994396 kubelet[1497]: E1101 08:56:56.720524    1497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-2db794c4-2444-4d03-b933-772cf722902e" podUID="e25da403-345f-40f6-b6f9-e28731089dd6"
	Nov 01 08:56:56 addons-994396 kubelet[1497]: E1101 08:56:56.961535    1497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-2db794c4-2444-4d03-b933-772cf722902e" podUID="e25da403-345f-40f6-b6f9-e28731089dd6"
	Nov 01 08:56:58 addons-994396 kubelet[1497]: E1101 08:56:58.392063    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761987418391644058  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:56:58 addons-994396 kubelet[1497]: E1101 08:56:58.392113    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761987418391644058  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:56:59 addons-994396 kubelet[1497]: I1101 08:56:59.970830    1497 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-vssmp" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 08:57:08 addons-994396 kubelet[1497]: E1101 08:57:08.394833    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761987428394169663  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:57:08 addons-994396 kubelet[1497]: E1101 08:57:08.394969    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761987428394169663  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:57:09 addons-994396 kubelet[1497]: I1101 08:57:09.972555    1497 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-b4ph6" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 08:57:18 addons-994396 kubelet[1497]: E1101 08:57:18.398239    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761987438397647220  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:57:18 addons-994396 kubelet[1497]: E1101 08:57:18.398289    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761987438397647220  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:57:23 addons-994396 kubelet[1497]: I1101 08:57:23.971398    1497 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [2ad7748982f904bf89ac86d1b7be83acfe37cfe9d240db5a3d2236808b8910a3] <==
	W1101 08:57:01.053135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:57:03.057079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:57:03.064607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:57:05.068402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:57:05.073837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:57:07.077735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:57:07.086003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:57:09.089036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:57:09.097506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:57:11.101156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:57:11.106759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:57:13.111115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:57:13.119826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:57:15.123462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:57:15.129416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:57:17.134800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:57:17.150805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:57:19.155194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:57:19.163097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:57:21.167147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:57:21.172745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:57:23.177678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:57:23.185006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:57:25.190565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:57:25.198545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-994396 -n addons-994396
helpers_test.go:269: (dbg) Run:  kubectl --context addons-994396 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-6ptqs ingress-nginx-admission-patch-dmt9r registry-6b586f9694-b4ph6 registry-creds-764b6fb674-xstzf helper-pod-create-pvc-2db794c4-2444-4d03-b933-772cf722902e
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-994396 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-6ptqs ingress-nginx-admission-patch-dmt9r registry-6b586f9694-b4ph6 registry-creds-764b6fb674-xstzf helper-pod-create-pvc-2db794c4-2444-4d03-b933-772cf722902e
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-994396 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-6ptqs ingress-nginx-admission-patch-dmt9r registry-6b586f9694-b4ph6 registry-creds-764b6fb674-xstzf helper-pod-create-pvc-2db794c4-2444-4d03-b933-772cf722902e: exit status 1 (100.15853ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-994396/192.168.39.195
	Start Time:       Sat, 01 Nov 2025 08:56:09 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rlw58 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rlw58:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age   From               Message
	  ----     ------     ----  ----               -------
	  Normal   Scheduled  78s   default-scheduler  Successfully assigned default/nginx to addons-994396
	  Normal   Pulling    78s   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     1s    kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     1s    kubelet            Error: ErrImagePull
	  Normal   BackOff    0s    kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     0s    kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-994396/192.168.39.195
	Start Time:       Sat, 01 Nov 2025 08:52:44 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mngk2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-mngk2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  4m43s                default-scheduler  Successfully assigned default/task-pv-pod to addons-994396
	  Warning  Failed     4m1s                 kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     61s (x2 over 4m1s)   kubelet            Error: ErrImagePull
	  Warning  Failed     61s                  kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    49s (x2 over 4m1s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     49s (x2 over 4m1s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    35s (x3 over 4m43s)  kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-65r97 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-65r97:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-6ptqs" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-dmt9r" not found
	Error from server (NotFound): pods "registry-6b586f9694-b4ph6" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-xstzf" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-2db794c4-2444-4d03-b933-772cf722902e" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-994396 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-6ptqs ingress-nginx-admission-patch-dmt9r registry-6b586f9694-b4ph6 registry-creds-764b6fb674-xstzf helper-pod-create-pvc-2db794c4-2444-4d03-b933-772cf722902e: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-994396 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/LocalPath (303.22s)

                                                
                                    
TestAddons/parallel/Yakd (236.53s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-j8882" [0077b05b-14cd-445f-9783-8883fbae27e5] Pending / Ready:ContainersNotReady (containers with unready status: [yakd]) / ContainersReady:ContainersNotReady (containers with unready status: [yakd])
helpers_test.go:337: TestAddons/parallel/Yakd: WARNING: pod list for "yakd-dashboard" "app.kubernetes.io/name=yakd-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:1047: ***** TestAddons/parallel/Yakd: pod "app.kubernetes.io/name=yakd-dashboard" failed to start within 2m0s: context deadline exceeded ****
addons_test.go:1047: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-994396 -n addons-994396
addons_test.go:1047: TestAddons/parallel/Yakd: showing logs for failed pods as of 2025-11-01 08:54:12.398059722 +0000 UTC m=+587.424736959
addons_test.go:1047: (dbg) Run:  kubectl --context addons-994396 describe po yakd-dashboard-5ff678cb9-j8882 -n yakd-dashboard
addons_test.go:1047: (dbg) kubectl --context addons-994396 describe po yakd-dashboard-5ff678cb9-j8882 -n yakd-dashboard:
Name:             yakd-dashboard-5ff678cb9-j8882
Namespace:        yakd-dashboard
Priority:         0
Service Account:  yakd-dashboard
Node:             addons-994396/192.168.39.195
Start Time:       Sat, 01 Nov 2025 08:45:32 +0000
Labels:           app.kubernetes.io/instance=yakd-dashboard
app.kubernetes.io/name=yakd-dashboard
gcp-auth-skip-secret=true
pod-template-hash=5ff678cb9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.12
IPs:
IP:           10.244.0.12
Controlled By:  ReplicaSet/yakd-dashboard-5ff678cb9
Containers:
yakd:
Container ID:   
Image:          docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624
Image ID:       
Port:           8080/TCP (http)
Host Port:      0/TCP (http)
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Limits:
memory:  256Mi
Requests:
memory:   128Mi
Liveness:   http-get http://:8080/ delay=10s timeout=10s period=10s #success=1 #failure=3
Readiness:  http-get http://:8080/ delay=10s timeout=10s period=10s #success=1 #failure=3
Environment:
KUBERNETES_NAMESPACE:  yakd-dashboard (v1:metadata.namespace)
HOSTNAME:              yakd-dashboard-5ff678cb9-j8882 (v1:metadata.name)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l2cgj (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-l2cgj:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  8m40s                  default-scheduler  Successfully assigned yakd-dashboard/yakd-dashboard-5ff678cb9-j8882 to addons-994396
Warning  Failed     3m29s                  kubelet            Failed to pull image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624": fetching target platform image selected from image index: reading manifest sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd in docker.io/marcnuri/yakd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     2m17s (x3 over 6m33s)  kubelet            Failed to pull image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624": reading manifest sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 in docker.io/marcnuri/yakd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     2m17s (x4 over 6m33s)  kubelet            Error: ErrImagePull
Normal   BackOff    61s (x10 over 6m32s)   kubelet            Back-off pulling image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624"
Warning  Failed     61s (x10 over 6m32s)   kubelet            Error: ImagePullBackOff
Normal   Pulling    49s (x5 over 8m36s)    kubelet            Pulling image "docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624"
addons_test.go:1047: (dbg) Run:  kubectl --context addons-994396 logs yakd-dashboard-5ff678cb9-j8882 -n yakd-dashboard
addons_test.go:1047: (dbg) Non-zero exit: kubectl --context addons-994396 logs yakd-dashboard-5ff678cb9-j8882 -n yakd-dashboard: exit status 1 (76.930473ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "yakd" in pod "yakd-dashboard-5ff678cb9-j8882" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:1047: kubectl --context addons-994396 logs yakd-dashboard-5ff678cb9-j8882 -n yakd-dashboard: exit status 1
addons_test.go:1048: failed waiting for YAKD - Kubernetes Dashboard pod: app.kubernetes.io/name=yakd-dashboard within 2m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Yakd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-994396 -n addons-994396
helpers_test.go:252: <<< TestAddons/parallel/Yakd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Yakd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-994396 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-994396 logs -n 25: (1.561481577s)
helpers_test.go:260: TestAddons/parallel/Yakd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-147882 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-147882 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:44 UTC │
	│ delete  │ -p download-only-147882                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-147882 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:44 UTC │
	│ start   │ -o=json --download-only -p download-only-664461 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-664461 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:44 UTC │
	│ delete  │ -p download-only-664461                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-664461 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:44 UTC │
	│ delete  │ -p download-only-147882                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-147882 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:44 UTC │
	│ delete  │ -p download-only-664461                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-664461 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:44 UTC │
	│ start   │ --download-only -p binary-mirror-775538 --alsologtostderr --binary-mirror http://127.0.0.1:36997 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-775538 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │                     │
	│ delete  │ -p binary-mirror-775538                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-775538 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:44 UTC │
	│ addons  │ enable dashboard -p addons-994396                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │                     │
	│ addons  │ disable dashboard -p addons-994396                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │                     │
	│ start   │ -p addons-994396 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:51 UTC │
	│ addons  │ addons-994396 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:51 UTC │ 01 Nov 25 08:51 UTC │
	│ addons  │ addons-994396 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:52 UTC │ 01 Nov 25 08:52 UTC │
	│ addons  │ enable headlamp -p addons-994396 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:52 UTC │ 01 Nov 25 08:52 UTC │
	│ addons  │ addons-994396 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:52 UTC │ 01 Nov 25 08:52 UTC │
	│ addons  │ addons-994396 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:52 UTC │ 01 Nov 25 08:52 UTC │
	│ addons  │ addons-994396 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:52 UTC │ 01 Nov 25 08:52 UTC │
	│ addons  │ addons-994396 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-994396        │ jenkins │ v1.37.0 │ 01 Nov 25 08:52 UTC │ 01 Nov 25 08:52 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 08:44:38
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 08:44:38.415244  535088 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:44:38.415511  535088 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:44:38.415520  535088 out.go:374] Setting ErrFile to fd 2...
	I1101 08:44:38.415525  535088 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:44:38.415722  535088 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-530629/.minikube/bin
	I1101 08:44:38.416292  535088 out.go:368] Setting JSON to false
	I1101 08:44:38.417206  535088 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":62800,"bootTime":1761923878,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 08:44:38.417275  535088 start.go:143] virtualization: kvm guest
	I1101 08:44:38.419180  535088 out.go:179] * [addons-994396] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 08:44:38.420576  535088 notify.go:221] Checking for updates...
	I1101 08:44:38.420602  535088 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 08:44:38.422388  535088 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 08:44:38.423762  535088 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-530629/kubeconfig
	I1101 08:44:38.425054  535088 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-530629/.minikube
	I1101 08:44:38.426433  535088 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 08:44:38.427613  535088 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 08:44:38.429086  535088 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 08:44:38.459669  535088 out.go:179] * Using the kvm2 driver based on user configuration
	I1101 08:44:38.460716  535088 start.go:309] selected driver: kvm2
	I1101 08:44:38.460736  535088 start.go:930] validating driver "kvm2" against <nil>
	I1101 08:44:38.460750  535088 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 08:44:38.461509  535088 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 08:44:38.461750  535088 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 08:44:38.461788  535088 cni.go:84] Creating CNI manager for ""
	I1101 08:44:38.461839  535088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 08:44:38.461847  535088 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1101 08:44:38.461887  535088 start.go:353] cluster config:
	{Name:addons-994396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-994396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 08:44:38.462012  535088 iso.go:125] acquiring lock: {Name:mk4a0ae0d13e232f8e381ad8e5059e42b27a0733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 08:44:38.463350  535088 out.go:179] * Starting "addons-994396" primary control-plane node in "addons-994396" cluster
	I1101 08:44:38.464523  535088 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 08:44:38.464559  535088 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-530629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 08:44:38.464570  535088 cache.go:59] Caching tarball of preloaded images
	I1101 08:44:38.464648  535088 preload.go:233] Found /home/jenkins/minikube-integration/21833-530629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 08:44:38.464659  535088 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 08:44:38.464982  535088 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/config.json ...
	I1101 08:44:38.465015  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/config.json: {Name:mk89a75531523cc17e10cf65ac144e466baef6b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:44:38.465175  535088 start.go:360] acquireMachinesLock for addons-994396: {Name:mk0f0dee5270210132f861d1e08706cfde31b35b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 08:44:38.465227  535088 start.go:364] duration metric: took 38.791µs to acquireMachinesLock for "addons-994396"
	I1101 08:44:38.465244  535088 start.go:93] Provisioning new machine with config: &{Name:addons-994396 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-994396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 08:44:38.465309  535088 start.go:125] createHost starting for "" (driver="kvm2")
	I1101 08:44:38.467651  535088 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1101 08:44:38.467824  535088 start.go:159] libmachine.API.Create for "addons-994396" (driver="kvm2")
	I1101 08:44:38.467852  535088 client.go:173] LocalClient.Create starting
	I1101 08:44:38.467960  535088 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem
	I1101 08:44:38.525135  535088 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem
	I1101 08:44:38.966403  535088 main.go:143] libmachine: creating domain...
	I1101 08:44:38.966427  535088 main.go:143] libmachine: creating network...
	I1101 08:44:38.968049  535088 main.go:143] libmachine: found existing default network
	I1101 08:44:38.968268  535088 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 08:44:38.968754  535088 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b9b7d0}
	I1101 08:44:38.968919  535088 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-994396</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 08:44:38.974811  535088 main.go:143] libmachine: creating private network mk-addons-994396 192.168.39.0/24...
	I1101 08:44:39.051181  535088 main.go:143] libmachine: private network mk-addons-994396 192.168.39.0/24 created
	I1101 08:44:39.051459  535088 main.go:143] libmachine: <network>
	  <name>mk-addons-994396</name>
	  <uuid>960ab3a9-e2ba-413f-8b77-ff4745b036d0</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:3e:a3:01'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
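	For anyone reproducing the "creating private network" step by hand, the same network can be defined and started from the XML shown above. A minimal Go sketch follows; it shells out to virsh (minikube's kvm2 driver talks to the libvirt API directly instead), and the file name mk-addons-994396.xml is an assumption made for the example.

package main

// Hypothetical reproduction of the private-network setup logged above.
// Assumes virsh is installed and the <network> XML has been saved to
// mk-addons-994396.xml.

import (
	"fmt"
	"log"
	"os/exec"
)

func defineAndStartNetwork(xmlPath, name string) error {
	for _, args := range [][]string{
		{"net-define", xmlPath}, // register the network with libvirt
		{"net-start", name},     // create the virbrN bridge and start its dnsmasq
		{"net-autostart", name}, // have libvirt bring it back up after a host reboot
	} {
		if out, err := exec.Command("virsh", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("virsh %v failed: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := defineAndStartNetwork("mk-addons-994396.xml", "mk-addons-994396"); err != nil {
		log.Fatal(err)
	}
}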
	
	I1101 08:44:39.051486  535088 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396 ...
	I1101 08:44:39.051511  535088 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21833-530629/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso
	I1101 08:44:39.051536  535088 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21833-530629/.minikube
	I1101 08:44:39.051601  535088 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21833-530629/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21833-530629/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso...
	I1101 08:44:39.334278  535088 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa...
	I1101 08:44:39.562590  535088 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/addons-994396.rawdisk...
	I1101 08:44:39.562642  535088 main.go:143] libmachine: Writing magic tar header
	I1101 08:44:39.562674  535088 main.go:143] libmachine: Writing SSH key tar header
	I1101 08:44:39.562773  535088 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396 ...
	I1101 08:44:39.562837  535088 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396
	I1101 08:44:39.562920  535088 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396 (perms=drwx------)
	I1101 08:44:39.562944  535088 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21833-530629/.minikube/machines
	I1101 08:44:39.562958  535088 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21833-530629/.minikube/machines (perms=drwxr-xr-x)
	I1101 08:44:39.562977  535088 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21833-530629/.minikube
	I1101 08:44:39.562988  535088 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21833-530629/.minikube (perms=drwxr-xr-x)
	I1101 08:44:39.562999  535088 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21833-530629
	I1101 08:44:39.563010  535088 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21833-530629 (perms=drwxrwxr-x)
	I1101 08:44:39.563022  535088 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1101 08:44:39.563032  535088 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1101 08:44:39.563043  535088 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1101 08:44:39.563053  535088 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1101 08:44:39.563063  535088 main.go:143] libmachine: checking permissions on dir: /home
	I1101 08:44:39.563072  535088 main.go:143] libmachine: skipping /home - not owner
	I1101 08:44:39.563079  535088 main.go:143] libmachine: defining domain...
	I1101 08:44:39.564528  535088 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-994396</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/addons-994396.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-994396'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
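	The domain definition above is generated from a template inside the kvm2 driver and handed to libvirt, which fills in the generated pieces (UUID, controllers, PCI addresses) visible in the "starting domain XML" dump further down. As an illustration of the pattern only, here is a simplified text/template rendering; the struct and template are stand-ins, not the driver's actual ones.

package main

import (
	"os"
	"text/template"
)

// Trimmed-down stand-in for the libvirt domain XML rendered above.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

type vmConfig struct {
	Name, DiskPath, Network string
	MemoryMiB, CPUs         int
}

func main() {
	t := template.Must(template.New("domain").Parse(domainTmpl))
	cfg := vmConfig{
		Name:      "addons-994396",
		DiskPath:  "/path/to/addons-994396.rawdisk",
		Network:   "mk-addons-994396",
		MemoryMiB: 4096,
		CPUs:      2,
	}
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}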
	
	I1101 08:44:39.569846  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:73:0a:92 in network default
	I1101 08:44:39.570479  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:39.570497  535088 main.go:143] libmachine: starting domain...
	I1101 08:44:39.570501  535088 main.go:143] libmachine: ensuring networks are active...
	I1101 08:44:39.571361  535088 main.go:143] libmachine: Ensuring network default is active
	I1101 08:44:39.571760  535088 main.go:143] libmachine: Ensuring network mk-addons-994396 is active
	I1101 08:44:39.572463  535088 main.go:143] libmachine: getting domain XML...
	I1101 08:44:39.574016  535088 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-994396</name>
	  <uuid>47158355-a959-4cbf-84ea-23a10000597a</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/addons-994396.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:2a:d2:e3'/>
	      <source network='mk-addons-994396'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:73:0a:92'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1101 08:44:40.850976  535088 main.go:143] libmachine: waiting for domain to start...
	I1101 08:44:40.852401  535088 main.go:143] libmachine: domain is now running
	I1101 08:44:40.852417  535088 main.go:143] libmachine: waiting for IP...
	I1101 08:44:40.853195  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:40.853985  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:40.853994  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:40.854261  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:40.854309  535088 retry.go:31] will retry after 216.262446ms: waiting for domain to come up
	I1101 08:44:41.071837  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:41.072843  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:41.072862  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:41.073274  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:41.073319  535088 retry.go:31] will retry after 360.302211ms: waiting for domain to come up
	I1101 08:44:41.434879  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:41.435804  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:41.435822  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:41.436172  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:41.436214  535088 retry.go:31] will retry after 371.777554ms: waiting for domain to come up
	I1101 08:44:41.809947  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:41.810703  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:41.810722  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:41.811072  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:41.811112  535088 retry.go:31] will retry after 462.843758ms: waiting for domain to come up
	I1101 08:44:42.275984  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:42.276618  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:42.276637  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:42.276993  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:42.277037  535088 retry.go:31] will retry after 560.265466ms: waiting for domain to come up
	I1101 08:44:42.838931  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:42.839781  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:42.839798  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:42.840224  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:42.840268  535088 retry.go:31] will retry after 839.411139ms: waiting for domain to come up
	I1101 08:44:43.681040  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:43.681790  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:43.681802  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:43.682192  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:43.682243  535088 retry.go:31] will retry after 1.099878288s: waiting for domain to come up
	I1101 08:44:44.783686  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:44.784502  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:44.784521  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:44.784840  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:44.784888  535088 retry.go:31] will retry after 1.052374717s: waiting for domain to come up
	I1101 08:44:45.839257  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:45.839889  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:45.839926  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:45.840243  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:45.840284  535088 retry.go:31] will retry after 1.704542625s: waiting for domain to come up
	I1101 08:44:47.547411  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:47.548205  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:47.548225  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:47.548588  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:47.548630  535088 retry.go:31] will retry after 1.752267255s: waiting for domain to come up
	I1101 08:44:49.302359  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:49.303199  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:49.303210  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:49.303522  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:49.303559  535088 retry.go:31] will retry after 2.861627149s: waiting for domain to come up
	I1101 08:44:52.168696  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:52.169368  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:52.169385  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:52.169681  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:52.169738  535088 retry.go:31] will retry after 2.277819072s: waiting for domain to come up
	I1101 08:44:54.449193  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:54.449957  535088 main.go:143] libmachine: no network interface addresses found for domain addons-994396 (source=lease)
	I1101 08:44:54.449978  535088 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:44:54.450273  535088 main.go:143] libmachine: unable to find current IP address of domain addons-994396 in network mk-addons-994396 (interfaces detected: [])
	I1101 08:44:54.450316  535088 retry.go:31] will retry after 3.87405165s: waiting for domain to come up
	I1101 08:44:58.329388  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.330073  535088 main.go:143] libmachine: domain addons-994396 has current primary IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.330089  535088 main.go:143] libmachine: found domain IP: 192.168.39.195
	I1101 08:44:58.330096  535088 main.go:143] libmachine: reserving static IP address...
	I1101 08:44:58.330490  535088 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-994396", mac: "52:54:00:2a:d2:e3", ip: "192.168.39.195"} in network mk-addons-994396
	I1101 08:44:58.532247  535088 main.go:143] libmachine: reserved static IP address 192.168.39.195 for domain addons-994396
	I1101 08:44:58.532270  535088 main.go:143] libmachine: waiting for SSH...
	I1101 08:44:58.532276  535088 main.go:143] libmachine: Getting to WaitForSSH function...
	I1101 08:44:58.535646  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.536214  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:58.536242  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.536445  535088 main.go:143] libmachine: Using SSH client type: native
	I1101 08:44:58.536737  535088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1101 08:44:58.536748  535088 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1101 08:44:58.655800  535088 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 08:44:58.656194  535088 main.go:143] libmachine: domain creation complete
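	The stretch between "waiting for IP..." (08:44:40) and "found domain IP" (08:44:58) above is the driver polling libvirt for a DHCP lease with a growing, jittered backoff, then probing SSH with a bare `exit 0`. Below is a standalone sketch of that wait loop using the virsh and ssh command-line tools rather than the libvirt API; the network name, MAC address, and key path are taken from the surrounding log, and "docker" is the guest's default SSH user.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForLease polls `virsh net-dhcp-leases` until a lease for the given MAC
// appears, mimicking the retry loop in the log (minikube also falls back to
// an ARP scan, which is omitted here).
func waitForLease(network, mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		out, _ := exec.Command("virsh", "net-dhcp-leases", network).Output()
		for _, line := range strings.Split(string(out), "\n") {
			if !strings.Contains(line, mac) {
				continue
			}
			// Columns: expiry date, expiry time, MAC, protocol, IP/prefix, hostname, client id.
			if f := strings.Fields(line); len(f) >= 5 {
				return strings.Split(f[4], "/")[0], nil
			}
		}
		time.Sleep(backoff)
		backoff *= 2 // the real loop uses jittered, capped intervals
	}
	return "", fmt.Errorf("no DHCP lease for %s in network %s", mac, network)
}

// sshReady mirrors the `exit 0` probe used once the IP is known.
func sshReady(ip, keyPath string) bool {
	return exec.Command("ssh", "-i", keyPath,
		"-o", "StrictHostKeyChecking=no", "-o", "ConnectTimeout=5",
		"docker@"+ip, "exit", "0").Run() == nil
}

func main() {
	ip, err := waitForLease("mk-addons-994396", "52:54:00:2a:d2:e3", 2*time.Minute)
	if err != nil {
		panic(err)
	}
	for !sshReady(ip, "/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa") {
		time.Sleep(time.Second)
	}
	fmt.Println("VM reachable at", ip)
}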
	I1101 08:44:58.657668  535088 machine.go:94] provisionDockerMachine start ...
	I1101 08:44:58.660444  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.660857  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:58.660881  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.661078  535088 main.go:143] libmachine: Using SSH client type: native
	I1101 08:44:58.661273  535088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1101 08:44:58.661283  535088 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 08:44:58.781217  535088 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1101 08:44:58.781253  535088 buildroot.go:166] provisioning hostname "addons-994396"
	I1101 08:44:58.784387  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.784787  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:58.784821  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.784992  535088 main.go:143] libmachine: Using SSH client type: native
	I1101 08:44:58.785186  535088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1101 08:44:58.785198  535088 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-994396 && echo "addons-994396" | sudo tee /etc/hostname
	I1101 08:44:58.921865  535088 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-994396
	
	I1101 08:44:58.924651  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.925106  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:58.925158  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:58.925363  535088 main.go:143] libmachine: Using SSH client type: native
	I1101 08:44:58.925623  535088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1101 08:44:58.925647  535088 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-994396' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-994396/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-994396' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 08:44:59.053021  535088 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 08:44:59.053062  535088 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21833-530629/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-530629/.minikube}
	I1101 08:44:59.053121  535088 buildroot.go:174] setting up certificates
	I1101 08:44:59.053134  535088 provision.go:84] configureAuth start
	I1101 08:44:59.056039  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.056491  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.056527  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.059390  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.059768  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.059793  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.059971  535088 provision.go:143] copyHostCerts
	I1101 08:44:59.060039  535088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-530629/.minikube/key.pem (1675 bytes)
	I1101 08:44:59.060157  535088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-530629/.minikube/ca.pem (1078 bytes)
	I1101 08:44:59.060215  535088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-530629/.minikube/cert.pem (1123 bytes)
	I1101 08:44:59.060262  535088 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-530629/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca-key.pem org=jenkins.addons-994396 san=[127.0.0.1 192.168.39.195 addons-994396 localhost minikube]
	I1101 08:44:59.098818  535088 provision.go:177] copyRemoteCerts
	I1101 08:44:59.098909  535088 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 08:44:59.101492  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.101853  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.101876  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.102044  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:44:59.192919  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 08:44:59.224374  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 08:44:59.254587  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 08:44:59.285112  535088 provision.go:87] duration metric: took 231.963697ms to configureAuth
	I1101 08:44:59.285151  535088 buildroot.go:189] setting minikube options for container-runtime
	I1101 08:44:59.285333  535088 config.go:182] Loaded profile config "addons-994396": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:44:59.288033  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.288440  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.288461  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.288660  535088 main.go:143] libmachine: Using SSH client type: native
	I1101 08:44:59.288854  535088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1101 08:44:59.288872  535088 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 08:44:59.552498  535088 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 08:44:59.552535  535088 machine.go:97] duration metric: took 894.848438ms to provisionDockerMachine
	I1101 08:44:59.552551  535088 client.go:176] duration metric: took 21.084691653s to LocalClient.Create
	I1101 08:44:59.552575  535088 start.go:167] duration metric: took 21.084749844s to libmachine.API.Create "addons-994396"
	I1101 08:44:59.552585  535088 start.go:293] postStartSetup for "addons-994396" (driver="kvm2")
	I1101 08:44:59.552598  535088 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 08:44:59.552698  535088 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 08:44:59.555985  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.556410  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.556446  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.556594  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:44:59.646378  535088 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 08:44:59.651827  535088 info.go:137] Remote host: Buildroot 2025.02
	I1101 08:44:59.651860  535088 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-530629/.minikube/addons for local assets ...
	I1101 08:44:59.652002  535088 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-530629/.minikube/files for local assets ...
	I1101 08:44:59.652045  535088 start.go:296] duration metric: took 99.451778ms for postStartSetup
	I1101 08:44:59.655428  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.655951  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.655983  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.656303  535088 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/config.json ...
	I1101 08:44:59.656524  535088 start.go:128] duration metric: took 21.191204758s to createHost
	I1101 08:44:59.659225  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.659662  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.659688  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.659918  535088 main.go:143] libmachine: Using SSH client type: native
	I1101 08:44:59.660165  535088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1101 08:44:59.660179  535088 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1101 08:44:59.778959  535088 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761986699.744832808
	
	I1101 08:44:59.778992  535088 fix.go:216] guest clock: 1761986699.744832808
	I1101 08:44:59.779003  535088 fix.go:229] Guest: 2025-11-01 08:44:59.744832808 +0000 UTC Remote: 2025-11-01 08:44:59.656538269 +0000 UTC m=+21.291332648 (delta=88.294539ms)
	I1101 08:44:59.779025  535088 fix.go:200] guest clock delta is within tolerance: 88.294539ms
	I1101 08:44:59.779033  535088 start.go:83] releasing machines lock for "addons-994396", held for 21.31379566s
	I1101 08:44:59.782561  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.783052  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.783085  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.783744  535088 ssh_runner.go:195] Run: cat /version.json
	I1101 08:44:59.783923  535088 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 08:44:59.786949  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.787338  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.787364  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.787467  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.787547  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:44:59.788054  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:44:59.788100  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:44:59.788306  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:44:59.898855  535088 ssh_runner.go:195] Run: systemctl --version
	I1101 08:44:59.905749  535088 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 08:45:00.064091  535088 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 08:45:00.072201  535088 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 08:45:00.072263  535088 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 08:45:00.092562  535088 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 08:45:00.092584  535088 start.go:496] detecting cgroup driver to use...
	I1101 08:45:00.092661  535088 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 08:45:00.112010  535088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 08:45:00.129164  535088 docker.go:218] disabling cri-docker service (if available) ...
	I1101 08:45:00.129222  535088 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 08:45:00.147169  535088 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 08:45:00.164876  535088 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 08:45:00.317011  535088 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 08:45:00.521291  535088 docker.go:234] disabling docker service ...
	I1101 08:45:00.521377  535088 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 08:45:00.537927  535088 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 08:45:00.552544  535088 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 08:45:00.714401  535088 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 08:45:00.855387  535088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 08:45:00.871802  535088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 08:45:00.895848  535088 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 08:45:00.895969  535088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:45:00.908735  535088 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 08:45:00.908831  535088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:45:00.924244  535088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:45:00.938467  535088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:45:00.951396  535088 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 08:45:00.965054  535088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:45:00.977595  535088 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:45:00.998868  535088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
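	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the settings below. This is a reconstruction from the commands, with section placement following CRI-O's standard configuration layout; the rest of the drop-in stays as shipped in the ISO.

[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]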
	I1101 08:45:01.011547  535088 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 08:45:01.022709  535088 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 08:45:01.022775  535088 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 08:45:01.044963  535088 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
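	The failed sysctl probe above is expected on a fresh guest: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter module is loaded, so the module is loaded and IPv4 forwarding is switched on before CRI-O is restarted. A small Go sketch of the same check, assuming it runs as root on the guest:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Bridge traffic only hits iptables (and thus kube-proxy rules) when
	// br_netfilter is loaded; the sysctl file appears with the module.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	// Pod-to-pod and pod-to-service traffic needs IPv4 forwarding on the node.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		log.Fatalf("enable ip_forward: %v", err)
	}
	log.Println("bridge netfilter and ip_forward configured")
}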
	I1101 08:45:01.057499  535088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 08:45:01.203336  535088 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 08:45:01.311792  535088 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 08:45:01.311884  535088 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 08:45:01.317453  535088 start.go:564] Will wait 60s for crictl version
	I1101 08:45:01.317538  535088 ssh_runner.go:195] Run: which crictl
	I1101 08:45:01.321986  535088 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 08:45:01.367266  535088 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1101 08:45:01.367363  535088 ssh_runner.go:195] Run: crio --version
	I1101 08:45:01.398127  535088 ssh_runner.go:195] Run: crio --version
	I1101 08:45:01.431424  535088 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1101 08:45:01.435939  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:01.436441  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:01.436471  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:01.436732  535088 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1101 08:45:01.441662  535088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 08:45:01.457635  535088 kubeadm.go:884] updating cluster {Name:addons-994396 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-994396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 08:45:01.457753  535088 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 08:45:01.457802  535088 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 08:45:01.495090  535088 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1101 08:45:01.495193  535088 ssh_runner.go:195] Run: which lz4
	I1101 08:45:01.500348  535088 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 08:45:01.506036  535088 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 08:45:01.506082  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1101 08:45:03.083875  535088 crio.go:462] duration metric: took 1.583585669s to copy over tarball
	I1101 08:45:03.084036  535088 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 08:45:04.665932  535088 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.581842537s)
	I1101 08:45:04.665965  535088 crio.go:469] duration metric: took 1.582007439s to extract the tarball
	I1101 08:45:04.665976  535088 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 08:45:04.707682  535088 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 08:45:04.751036  535088 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 08:45:04.751073  535088 cache_images.go:86] Images are preloaded, skipping loading
	I1101 08:45:04.751085  535088 kubeadm.go:935] updating node { 192.168.39.195 8443 v1.34.1 crio true true} ...
	I1101 08:45:04.751212  535088 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-994396 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.195
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-994396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 08:45:04.751302  535088 ssh_runner.go:195] Run: crio config
	I1101 08:45:04.801702  535088 cni.go:84] Creating CNI manager for ""
	I1101 08:45:04.801733  535088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 08:45:04.801758  535088 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 08:45:04.801791  535088 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.195 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-994396 NodeName:addons-994396 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.195"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.195 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 08:45:04.801978  535088 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.195
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-994396"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.195"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.195"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 08:45:04.802066  535088 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 08:45:04.814571  535088 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 08:45:04.814653  535088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 08:45:04.826605  535088 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1101 08:45:04.846937  535088 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 08:45:04.868213  535088 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1101 08:45:04.888962  535088 ssh_runner.go:195] Run: grep 192.168.39.195	control-plane.minikube.internal$ /etc/hosts
	I1101 08:45:04.893299  535088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.195	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 08:45:04.908547  535088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 08:45:05.049704  535088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 08:45:05.081089  535088 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396 for IP: 192.168.39.195
	I1101 08:45:05.081124  535088 certs.go:195] generating shared ca certs ...
	I1101 08:45:05.081146  535088 certs.go:227] acquiring lock for ca certs: {Name:mkfa41f6ee02a6d4adbbbd414d6f4b29bf47b076 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.081312  535088 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-530629/.minikube/ca.key
	I1101 08:45:05.135626  535088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt ...
	I1101 08:45:05.135669  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt: {Name:mk42d9a91568201fc7bb838317bb109a9d557e4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.135920  535088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-530629/.minikube/ca.key ...
	I1101 08:45:05.135935  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/ca.key: {Name:mk8868035ca874da4b6bcd8361c76f97522a09dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.136031  535088 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.key
	I1101 08:45:05.223112  535088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.crt ...
	I1101 08:45:05.223159  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.crt: {Name:mk17c24c1e5b8188202459729e4a5c2f9a4008a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.223343  535088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.key ...
	I1101 08:45:05.223356  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.key: {Name:mk64bb220f00b339bafb0b18442258c31c6af7ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.223432  535088 certs.go:257] generating profile certs ...
	I1101 08:45:05.223509  535088 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.key
	I1101 08:45:05.223524  535088 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt with IP's: []
	I1101 08:45:05.791770  535088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt ...
	I1101 08:45:05.791805  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: {Name:mk739df015c10897beee55b57aac6a9687c49aee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.791993  535088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.key ...
	I1101 08:45:05.792008  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.key: {Name:mk22e303787fbf3b8945b47ac917db338129138f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.792086  535088 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.key.2a971b58
	I1101 08:45:05.792105  535088 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.crt.2a971b58 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.195]
	I1101 08:45:05.964688  535088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.crt.2a971b58 ...
	I1101 08:45:05.964721  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.crt.2a971b58: {Name:mkc85c65639cbe37cb2f18c20238504fe651c568 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.964892  535088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.key.2a971b58 ...
	I1101 08:45:05.964917  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.key.2a971b58: {Name:mk0a07f1288d6c9ced8ef2d4bb53cbfce6f3c734 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:05.964998  535088 certs.go:382] copying /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.crt.2a971b58 -> /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.crt
	I1101 08:45:05.965075  535088 certs.go:386] copying /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.key.2a971b58 -> /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.key
	I1101 08:45:05.965124  535088 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.key
	I1101 08:45:05.965142  535088 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.crt with IP's: []
	I1101 08:45:06.097161  535088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.crt ...
	I1101 08:45:06.097197  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.crt: {Name:mke456d45c85355b327c605777e7e939bd178f8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:06.097374  535088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.key ...
	I1101 08:45:06.097388  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.key: {Name:mk96b8f9598bf40057b4d6b2c6e97a30a363b3bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:06.097558  535088 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 08:45:06.097602  535088 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem (1078 bytes)
	I1101 08:45:06.097627  535088 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem (1123 bytes)
	I1101 08:45:06.097651  535088 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/key.pem (1675 bytes)
	I1101 08:45:06.098363  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 08:45:06.130486  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 08:45:06.160429  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 08:45:06.189962  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 08:45:06.219452  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 08:45:06.250552  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 08:45:06.282860  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 08:45:06.313986  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 08:45:06.344383  535088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 08:45:06.376611  535088 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 08:45:06.399751  535088 ssh_runner.go:195] Run: openssl version
	I1101 08:45:06.406933  535088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 08:45:06.421716  535088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 08:45:06.427410  535088 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 08:45:06.427478  535088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 08:45:06.435363  535088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 08:45:06.449854  535088 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 08:45:06.455299  535088 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 08:45:06.455368  535088 kubeadm.go:401] StartCluster: {Name:addons-994396 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-994396 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 08:45:06.455464  535088 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:45:06.455528  535088 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:45:06.499318  535088 cri.go:89] found id: ""
	I1101 08:45:06.499395  535088 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 08:45:06.513696  535088 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 08:45:06.527370  535088 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 08:45:06.541099  535088 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 08:45:06.541122  535088 kubeadm.go:158] found existing configuration files:
	
	I1101 08:45:06.541170  535088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 08:45:06.553610  535088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 08:45:06.553677  535088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 08:45:06.567384  535088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 08:45:06.580377  535088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 08:45:06.580444  535088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 08:45:06.593440  535088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 08:45:06.605393  535088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 08:45:06.605460  535088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 08:45:06.618978  535088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 08:45:06.631411  535088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 08:45:06.631487  535088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 08:45:06.645452  535088 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1101 08:45:06.719122  535088 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 08:45:06.719190  535088 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 08:45:06.829004  535088 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 08:45:06.829160  535088 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 08:45:06.829291  535088 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 08:45:06.841691  535088 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 08:45:06.866137  535088 out.go:252]   - Generating certificates and keys ...
	I1101 08:45:06.866269  535088 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 08:45:06.866364  535088 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 08:45:07.164883  535088 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 08:45:07.767615  535088 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 08:45:08.072088  535088 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 08:45:08.514870  535088 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 08:45:08.646331  535088 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 08:45:08.646504  535088 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-994396 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	I1101 08:45:08.781122  535088 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 08:45:08.781335  535088 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-994396 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	I1101 08:45:08.899420  535088 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 08:45:09.007181  535088 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 08:45:09.224150  535088 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 08:45:09.224224  535088 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 08:45:09.511033  535088 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 08:45:09.752693  535088 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 08:45:09.819463  535088 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 08:45:10.005082  535088 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 08:45:10.463552  535088 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 08:45:10.464025  535088 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 08:45:10.466454  535088 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 08:45:10.471575  535088 out.go:252]   - Booting up control plane ...
	I1101 08:45:10.471714  535088 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 08:45:10.471809  535088 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 08:45:10.471913  535088 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 08:45:10.490781  535088 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 08:45:10.491002  535088 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 08:45:10.498306  535088 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 08:45:10.498812  535088 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 08:45:10.498893  535088 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 08:45:10.686796  535088 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 08:45:10.686991  535088 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 08:45:11.697343  535088 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.005207328s
	I1101 08:45:11.699752  535088 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 08:45:11.699949  535088 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.195:8443/livez
	I1101 08:45:11.700150  535088 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 08:45:11.704134  535088 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 08:45:13.981077  535088 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.280860487s
	I1101 08:45:15.371368  535088 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.67283221s
	I1101 08:45:17.198417  535088 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501722237s
	I1101 08:45:17.211581  535088 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 08:45:17.231075  535088 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 08:45:17.253882  535088 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 08:45:17.254137  535088 kubeadm.go:319] [mark-control-plane] Marking the node addons-994396 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 08:45:17.268868  535088 kubeadm.go:319] [bootstrap-token] Using token: f9fr0l.j77e5jevkskl9xb5
	I1101 08:45:17.270121  535088 out.go:252]   - Configuring RBAC rules ...
	I1101 08:45:17.270326  535088 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 08:45:17.277792  535088 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 08:45:17.293695  535088 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 08:45:17.296955  535088 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 08:45:17.300284  535088 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 08:45:17.303890  535088 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 08:45:17.605222  535088 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 08:45:18.065761  535088 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 08:45:18.604676  535088 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 08:45:18.605674  535088 kubeadm.go:319] 
	I1101 08:45:18.605802  535088 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 08:45:18.605830  535088 kubeadm.go:319] 
	I1101 08:45:18.605992  535088 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 08:45:18.606023  535088 kubeadm.go:319] 
	I1101 08:45:18.606068  535088 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 08:45:18.606156  535088 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 08:45:18.606234  535088 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 08:45:18.606243  535088 kubeadm.go:319] 
	I1101 08:45:18.606321  535088 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 08:45:18.606330  535088 kubeadm.go:319] 
	I1101 08:45:18.606402  535088 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 08:45:18.606415  535088 kubeadm.go:319] 
	I1101 08:45:18.606489  535088 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 08:45:18.606605  535088 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 08:45:18.606702  535088 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 08:45:18.606712  535088 kubeadm.go:319] 
	I1101 08:45:18.606815  535088 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 08:45:18.606947  535088 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 08:45:18.606965  535088 kubeadm.go:319] 
	I1101 08:45:18.607067  535088 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token f9fr0l.j77e5jevkskl9xb5 \
	I1101 08:45:18.607196  535088 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:56aa18b20985495d814b65ba7a2f910118620c74c98b944601f44598a9c0be1d \
	I1101 08:45:18.607233  535088 kubeadm.go:319] 	--control-plane 
	I1101 08:45:18.607244  535088 kubeadm.go:319] 
	I1101 08:45:18.607366  535088 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 08:45:18.607389  535088 kubeadm.go:319] 
	I1101 08:45:18.607497  535088 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token f9fr0l.j77e5jevkskl9xb5 \
	I1101 08:45:18.607642  535088 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:56aa18b20985495d814b65ba7a2f910118620c74c98b944601f44598a9c0be1d 
	I1101 08:45:18.609590  535088 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 08:45:18.609615  535088 cni.go:84] Creating CNI manager for ""
	I1101 08:45:18.609625  535088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 08:45:18.611467  535088 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 08:45:18.612559  535088 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 08:45:18.629659  535088 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1101 08:45:18.653188  535088 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 08:45:18.653266  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:18.653283  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-994396 minikube.k8s.io/updated_at=2025_11_01T08_45_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7 minikube.k8s.io/name=addons-994396 minikube.k8s.io/primary=true
	I1101 08:45:18.823964  535088 ops.go:34] apiserver oom_adj: -16
	I1101 08:45:18.824003  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:19.324429  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:19.824169  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:20.324357  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:20.825065  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:21.324643  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:21.824929  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:22.325055  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:22.824179  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:23.324346  535088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:45:23.422037  535088 kubeadm.go:1114] duration metric: took 4.768840437s to wait for elevateKubeSystemPrivileges
	I1101 08:45:23.422092  535088 kubeadm.go:403] duration metric: took 16.966730014s to StartCluster
	I1101 08:45:23.422117  535088 settings.go:142] acquiring lock: {Name:mke0bea80b55c21af3a3a0f83862cfe6da014dd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:23.422289  535088 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-530629/kubeconfig
	I1101 08:45:23.422848  535088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/kubeconfig: {Name:mk1f1e6312f33030082fd627c6f74ca7eee16587 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:45:23.423145  535088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 08:45:23.423170  535088 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 08:45:23.423239  535088 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1101 08:45:23.423378  535088 addons.go:70] Setting yakd=true in profile "addons-994396"
	I1101 08:45:23.423402  535088 addons.go:239] Setting addon yakd=true in "addons-994396"
	I1101 08:45:23.423420  535088 addons.go:70] Setting inspektor-gadget=true in profile "addons-994396"
	I1101 08:45:23.423440  535088 config.go:182] Loaded profile config "addons-994396": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:45:23.423457  535088 addons.go:239] Setting addon inspektor-gadget=true in "addons-994396"
	I1101 08:45:23.423459  535088 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-994396"
	I1101 08:45:23.423473  535088 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-994396"
	I1101 08:45:23.423435  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.423491  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.423507  535088 addons.go:70] Setting registry=true in profile "addons-994396"
	I1101 08:45:23.423518  535088 addons.go:239] Setting addon registry=true in "addons-994396"
	I1101 08:45:23.423522  535088 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-994396"
	I1101 08:45:23.423539  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.423555  535088 addons.go:70] Setting cloud-spanner=true in profile "addons-994396"
	I1101 08:45:23.423568  535088 addons.go:239] Setting addon cloud-spanner=true in "addons-994396"
	I1101 08:45:23.423606  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.423731  535088 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-994396"
	I1101 08:45:23.423760  535088 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-994396"
	I1101 08:45:23.424125  535088 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-994396"
	I1101 08:45:23.424214  535088 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-994396"
	I1101 08:45:23.424248  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.423443  535088 addons.go:70] Setting metrics-server=true in profile "addons-994396"
	I1101 08:45:23.424283  535088 addons.go:239] Setting addon metrics-server=true in "addons-994396"
	I1101 08:45:23.424313  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.423545  535088 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-994396"
	I1101 08:45:23.424411  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.424496  535088 addons.go:70] Setting ingress=true in profile "addons-994396"
	I1101 08:45:23.423498  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.424512  535088 addons.go:239] Setting addon ingress=true in "addons-994396"
	I1101 08:45:23.424544  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.425045  535088 addons.go:70] Setting registry-creds=true in profile "addons-994396"
	I1101 08:45:23.425074  535088 addons.go:239] Setting addon registry-creds=true in "addons-994396"
	I1101 08:45:23.425105  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.425174  535088 addons.go:70] Setting volcano=true in profile "addons-994396"
	I1101 08:45:23.425210  535088 addons.go:239] Setting addon volcano=true in "addons-994396"
	I1101 08:45:23.425245  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.423474  535088 addons.go:70] Setting default-storageclass=true in profile "addons-994396"
	I1101 08:45:23.425528  535088 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-994396"
	I1101 08:45:23.425555  535088 addons.go:70] Setting gcp-auth=true in profile "addons-994396"
	I1101 08:45:23.425587  535088 addons.go:70] Setting volumesnapshots=true in profile "addons-994396"
	I1101 08:45:23.425594  535088 mustload.go:66] Loading cluster: addons-994396
	I1101 08:45:23.425605  535088 addons.go:239] Setting addon volumesnapshots=true in "addons-994396"
	I1101 08:45:23.425629  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.425759  535088 config.go:182] Loaded profile config "addons-994396": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:45:23.426001  535088 addons.go:70] Setting storage-provisioner=true in profile "addons-994396"
	I1101 08:45:23.426034  535088 addons.go:239] Setting addon storage-provisioner=true in "addons-994396"
	I1101 08:45:23.426060  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.426263  535088 addons.go:70] Setting ingress-dns=true in profile "addons-994396"
	I1101 08:45:23.426312  535088 addons.go:239] Setting addon ingress-dns=true in "addons-994396"
	I1101 08:45:23.426349  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.428071  535088 out.go:179] * Verifying Kubernetes components...
	I1101 08:45:23.430376  535088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 08:45:23.432110  535088 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1101 08:45:23.432211  535088 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1101 08:45:23.432239  535088 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1101 08:45:23.432548  535088 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-994396"
	I1101 08:45:23.433347  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.433599  535088 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1101 08:45:23.433622  535088 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1101 08:45:23.434372  535088 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1101 08:45:23.434372  535088 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1101 08:45:23.434372  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1101 08:45:23.434399  535088 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	W1101 08:45:23.434936  535088 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1101 08:45:23.434947  535088 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1101 08:45:23.434397  535088 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1101 08:45:23.435739  535088 addons.go:239] Setting addon default-storageclass=true in "addons-994396"
	I1101 08:45:23.435133  535088 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 08:45:23.435780  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.435145  535088 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1101 08:45:23.435145  535088 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1101 08:45:23.435569  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:23.436246  535088 out.go:179]   - Using image docker.io/registry:3.0.0
	I1101 08:45:23.436291  535088 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 08:45:23.437459  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1101 08:45:23.436270  535088 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1101 08:45:23.437541  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1101 08:45:23.437032  535088 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 08:45:23.437636  535088 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 08:45:23.437844  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1101 08:45:23.437918  535088 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 08:45:23.437851  535088 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1101 08:45:23.437941  535088 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 08:45:23.438856  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1101 08:45:23.437976  535088 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 08:45:23.438988  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1101 08:45:23.439032  535088 out.go:179]   - Using image docker.io/busybox:stable
	I1101 08:45:23.439073  535088 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1101 08:45:23.439539  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1101 08:45:23.439090  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1101 08:45:23.439094  535088 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1101 08:45:23.439317  535088 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 08:45:23.439929  535088 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 08:45:23.439932  535088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1101 08:45:23.439957  535088 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1101 08:45:23.439990  535088 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 08:45:23.440001  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 08:45:23.440144  535088 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 08:45:23.440159  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1101 08:45:23.442297  535088 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 08:45:23.442308  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1101 08:45:23.442298  535088 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1101 08:45:23.443272  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.443791  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.443933  535088 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 08:45:23.443957  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1101 08:45:23.444059  535088 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 08:45:23.444083  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1101 08:45:23.444856  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.444941  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.445160  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1101 08:45:23.445705  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.446038  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.446083  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.446929  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.448105  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1101 08:45:23.448713  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.449090  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.450028  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.450296  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.450327  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.450341  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.450369  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.450600  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1101 08:45:23.451017  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.451085  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.451162  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.451241  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.451823  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.451855  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.452155  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.452274  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.452437  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.452519  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.452542  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.452550  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.452567  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.452769  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.453008  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.453181  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.453204  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.453341  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1101 08:45:23.453485  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.453526  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.453547  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.453582  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.453698  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.453748  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.453776  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.453961  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.454247  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.454637  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.454592  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.454668  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.454765  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.454810  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.454640  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.454828  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.454953  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.455189  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.455476  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.455511  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.455565  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.455603  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.455714  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.455949  535088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1101 08:45:23.456005  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:23.457369  535088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1101 08:45:23.457390  535088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1101 08:45:23.460387  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.460852  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:23.460874  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:23.461072  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	W1101 08:45:23.763758  535088 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57416->192.168.39.195:22: read: connection reset by peer
	I1101 08:45:23.763807  535088 retry.go:31] will retry after 294.020846ms: ssh: handshake failed: read tcp 192.168.39.1:57416->192.168.39.195:22: read: connection reset by peer
	W1101 08:45:23.763891  535088 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57426->192.168.39.195:22: read: connection reset by peer
	I1101 08:45:23.763941  535088 retry.go:31] will retry after 247.932093ms: ssh: handshake failed: read tcp 192.168.39.1:57426->192.168.39.195:22: read: connection reset by peer
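The two handshake failures above are transient: sshutil dials several sessions to the node concurrently and retries each one after a short backoff (retry.go). A rough shell equivalent of that retry loop, reusing the key path and address from the log (the flags and delays here are illustrative, not minikube's actual schedule):

    for delay in 0.25 0.3 0.6 1.2; do
      ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 \
          -i /home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa \
          docker@192.168.39.195 true && break   # handshake succeeded, stop retrying
      sleep "$delay"                            # back off before the next attempt
    done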
	I1101 08:45:23.987612  535088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 08:45:23.987618  535088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 08:45:24.391549  535088 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1101 08:45:24.391592  535088 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1101 08:45:24.396118  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 08:45:24.428988  535088 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1101 08:45:24.429026  535088 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1101 08:45:24.539937  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 08:45:24.542018  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 08:45:24.551067  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1101 08:45:24.578439  535088 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1101 08:45:24.578476  535088 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1101 08:45:24.590870  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 08:45:24.593597  535088 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 08:45:24.593630  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1101 08:45:24.648891  535088 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:24.648945  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1101 08:45:24.654530  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 08:45:24.691639  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 08:45:24.775174  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 08:45:24.894476  535088 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1101 08:45:24.894518  535088 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1101 08:45:25.110719  535088 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1101 08:45:25.110755  535088 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1101 08:45:25.248567  535088 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 08:45:25.248606  535088 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 08:45:25.251834  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 08:45:25.279634  535088 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1101 08:45:25.279661  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1101 08:45:25.282613  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:25.356642  535088 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1101 08:45:25.356672  535088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1101 08:45:25.596573  535088 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1101 08:45:25.596609  535088 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1101 08:45:25.610846  535088 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1101 08:45:25.610885  535088 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1101 08:45:25.674735  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1101 08:45:25.705462  535088 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 08:45:25.705495  535088 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 08:45:25.746878  535088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1101 08:45:25.746929  535088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1101 08:45:25.925617  535088 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1101 08:45:25.925645  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1101 08:45:25.996036  535088 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1101 08:45:25.996070  535088 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1101 08:45:26.051328  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 08:45:26.240447  535088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1101 08:45:26.240483  535088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1101 08:45:26.408185  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1101 08:45:26.436460  535088 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 08:45:26.436501  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1101 08:45:26.557448  535088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1101 08:45:26.557481  535088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1101 08:45:26.856571  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 08:45:27.059648  535088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1101 08:45:27.059683  535088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1101 08:45:27.286113  535088 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.298454996s)
	I1101 08:45:27.286197  535088 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.298476587s)
	I1101 08:45:27.286240  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.890088886s)
	I1101 08:45:27.286229  535088 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
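The long /bin/bash pipeline above edits the coredns ConfigMap in place: it inserts a hosts block mapping host.minikube.internal to 192.168.39.1 ahead of the forward plugin, and adds a log directive before errors. To confirm the injection took effect, one could dump the Corefile afterwards (an illustrative check, reusing the kubectl and kubeconfig paths from the log):

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # The output should now contain, before the "forward . /etc/resolv.conf" line:
    #     hosts {
    #        192.168.39.1 host.minikube.internal
    #        fallthrough
    #     }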
	I1101 08:45:27.286918  535088 node_ready.go:35] waiting up to 6m0s for node "addons-994396" to be "Ready" ...
	I1101 08:45:27.312278  535088 node_ready.go:49] node "addons-994396" is "Ready"
	I1101 08:45:27.312325  535088 node_ready.go:38] duration metric: took 25.37676ms for node "addons-994396" to be "Ready" ...
	I1101 08:45:27.312346  535088 api_server.go:52] waiting for apiserver process to appear ...
	I1101 08:45:27.312422  535088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 08:45:27.686576  535088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1101 08:45:27.686612  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1101 08:45:27.792267  535088 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-994396" context rescaled to 1 replicas
	I1101 08:45:28.140990  535088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1101 08:45:28.141032  535088 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1101 08:45:28.704311  535088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1101 08:45:28.704352  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1101 08:45:29.292401  535088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1101 08:45:29.292429  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1101 08:45:29.854708  535088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 08:45:29.854740  535088 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1101 08:45:30.288568  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 08:45:30.575091  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.033025614s)
	I1101 08:45:30.862016  535088 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1101 08:45:30.865323  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:30.865769  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:30.865797  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:30.866047  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:31.632521  535088 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1101 08:45:31.806924  535088 addons.go:239] Setting addon gcp-auth=true in "addons-994396"
	I1101 08:45:31.807015  535088 host.go:66] Checking if "addons-994396" exists ...
	I1101 08:45:31.809359  535088 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1101 08:45:31.813090  535088 main.go:143] libmachine: domain addons-994396 has defined MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:31.814762  535088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:d2:e3", ip: ""} in network mk-addons-994396: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:54 +0000 UTC Type:0 Mac:52:54:00:2a:d2:e3 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-994396 Clientid:01:52:54:00:2a:d2:e3}
	I1101 08:45:31.814801  535088 main.go:143] libmachine: domain addons-994396 has defined IP address 192.168.39.195 and MAC address 52:54:00:2a:d2:e3 in network mk-addons-994396
	I1101 08:45:31.814989  535088 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/addons-994396/id_rsa Username:docker}
	I1101 08:45:33.008057  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.456928918s)
	I1101 08:45:33.008164  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.417239871s)
	I1101 08:45:33.008205  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.35364594s)
	I1101 08:45:33.008240  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.316568456s)
	I1101 08:45:33.008302  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (8.233079465s)
	I1101 08:45:33.008386  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.756527935s)
	I1101 08:45:33.008524  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.725858558s)
	I1101 08:45:33.008553  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.333786806s)
	W1101 08:45:33.008563  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:33.008566  535088 addons.go:480] Verifying addon registry=true in "addons-994396"
	I1101 08:45:33.008586  535088 retry.go:31] will retry after 241.480923ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
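The gadget manifests themselves apply cleanly (everything in ig-deployment.yaml is created); only /etc/kubernetes/addons/ig-crd.yaml is rejected, because kubectl's client-side validation finds no apiVersion: or kind: at the top of the document, so the later retries hit the same error each time. A quick way to confirm the broken header on the node (illustrative only, not part of the test flow):

    head -n 5 /etc/kubernetes/addons/ig-crd.yaml
    # Every YAML document passed to kubectl apply must start with both of these fields:
    grep -cE '^(apiVersion|kind):' /etc/kubernetes/addons/ig-crd.yaml \
      || echo "no apiVersion/kind header found"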
	I1101 08:45:33.008638  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.957281467s)
	I1101 08:45:33.008733  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.600492861s)
	I1101 08:45:33.008738  535088 addons.go:480] Verifying addon metrics-server=true in "addons-994396"
	I1101 08:45:33.010227  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.470250108s)
	I1101 08:45:33.010253  535088 addons.go:480] Verifying addon ingress=true in "addons-994396"
	I1101 08:45:33.011210  535088 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-994396 service yakd-dashboard -n yakd-dashboard
	
	I1101 08:45:33.011218  535088 out.go:179] * Verifying registry addon...
	I1101 08:45:33.012250  535088 out.go:179] * Verifying ingress addon...
	I1101 08:45:33.014024  535088 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1101 08:45:33.015512  535088 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1101 08:45:33.051723  535088 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 08:45:33.051749  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:33.051812  535088 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1101 08:45:33.051833  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 08:45:33.111540  535088 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
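The 'object has been modified' failure for storage-provisioner-rancher is an optimistic-concurrency conflict while annotating the local-path StorageClass as the cluster default. Done by hand, the same change is a pair of patches (shown for reference only; the exact update the addon performs may differ, and "standard" is just the usual name of minikube's other StorageClass):

    kubectl patch storageclass local-path -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
    # and, if another class should stop being the default:
    kubectl patch storageclass standard -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'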
	I1101 08:45:33.250325  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:33.619402  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:33.619673  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:33.847569  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.990948052s)
	I1101 08:45:33.847595  535088 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (6.535150405s)
	I1101 08:45:33.847621  535088 api_server.go:72] duration metric: took 10.424417181s to wait for apiserver process to appear ...
	I1101 08:45:33.847629  535088 api_server.go:88] waiting for apiserver healthz status ...
	W1101 08:45:33.847626  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1101 08:45:33.847652  535088 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I1101 08:45:33.847651  535088 retry.go:31] will retry after 218.125549ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
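This failure is an ordering problem rather than a bad manifest: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass, but its CRD, created by the same apply, has not been established yet when the class is submitted, hence "no matches for kind ... ensure CRDs are installed first". The --force retry later in the log appears to complete without this error once the CRD has registered. Reproduced by hand, the safe ordering would look like this (illustrative, reusing the paths from the log):

    KUBECTL="sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"
    # Wait until the CRD is established before creating objects of that kind:
    $KUBECTL wait --for=condition=Established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    $KUBECTL apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml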
	I1101 08:45:33.908865  535088 api_server.go:279] https://192.168.39.195:8443/healthz returned 200:
	ok
	I1101 08:45:33.910593  535088 api_server.go:141] control plane version: v1.34.1
	I1101 08:45:33.910629  535088 api_server.go:131] duration metric: took 62.993472ms to wait for apiserver health ...
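The readiness gate here is simply the apiserver's /healthz endpoint returning 200. Outside the test harness the same probe is a one-liner (illustrative; -k skips certificate verification against the cluster's self-signed CA):

    curl -sk https://192.168.39.195:8443/healthz
    # prints: ok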
	I1101 08:45:33.910638  535088 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 08:45:33.979264  535088 system_pods.go:59] 17 kube-system pods found
	I1101 08:45:33.979341  535088 system_pods.go:61] "amd-gpu-device-plugin-vssmp" [a3b8c16e-b583-47df-a5c2-97218d3ec5be] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 08:45:33.979358  535088 system_pods.go:61] "coredns-66bc5c9577-2rqh8" [b131b2b2-f9b9-4197-8bc7-4d1bc185c804] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:45:33.979373  535088 system_pods.go:61] "coredns-66bc5c9577-8b9dw" [7580a21e-bef2-4e34-84b5-b8f67e32b346] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:45:33.979381  535088 system_pods.go:61] "etcd-addons-994396" [9ed2483c-c69f-483c-a489-238983cc8e9e] Running
	I1101 08:45:33.979388  535088 system_pods.go:61] "kube-apiserver-addons-994396" [0d587a06-f48e-4068-bb17-3a28d8a8d340] Running
	I1101 08:45:33.979401  535088 system_pods.go:61] "kube-controller-manager-addons-994396" [e60002dc-411e-458d-b7ea-affbee71d5a0] Running
	I1101 08:45:33.979413  535088 system_pods.go:61] "kube-ingress-dns-minikube" [d947f942-2149-492a-9b4e-1f9c22405815] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:45:33.979421  535088 system_pods.go:61] "kube-proxy-fbmdq" [dc5dd6b4-2f38-4c9d-acd8-92f7984fd96a] Running
	I1101 08:45:33.979431  535088 system_pods.go:61] "kube-scheduler-addons-994396" [bfc13d51-5be5-4462-b4a9-5d4f37f75bc4] Running
	I1101 08:45:33.979438  535088 system_pods.go:61] "metrics-server-85b7d694d7-qpjgn" [ca6b12be-7c02-4334-aa28-6300877d8e89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:45:33.979452  535088 system_pods.go:61] "nvidia-device-plugin-daemonset-bn97p" [8cc13452-31c6-46b5-8efb-e8b44ec63c27] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 08:45:33.979468  535088 system_pods.go:61] "registry-6b586f9694-b4ph6" [f2c8e5be-bee4-4b31-a8dc-ee43d6a6430c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:45:33.979480  535088 system_pods.go:61] "registry-creds-764b6fb674-xstzf" [75cdadc5-e3ea-4aae-9002-6dca21e0f758] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:45:33.979501  535088 system_pods.go:61] "registry-proxy-bzs78" [151e456a-63e0-4527-8511-34c4444fef48] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 08:45:33.979512  535088 system_pods.go:61] "snapshot-controller-7d9fbc56b8-2pbx5" [e9e973a4-20dd-4785-a3d6-1557c012cc76] Pending
	I1101 08:45:33.979522  535088 system_pods.go:61] "snapshot-controller-7d9fbc56b8-jbkmr" [19dc2ae7-668b-4952-9c2d-6602eac4449e] Pending
	I1101 08:45:33.979531  535088 system_pods.go:61] "storage-provisioner" [a0182754-0c9c-458b-a340-20ec025cb56c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 08:45:33.979545  535088 system_pods.go:74] duration metric: took 68.899123ms to wait for pod list to return data ...
	I1101 08:45:33.979563  535088 default_sa.go:34] waiting for default service account to be created ...
	I1101 08:45:34.005592  535088 default_sa.go:45] found service account: "default"
	I1101 08:45:34.005620  535088 default_sa.go:55] duration metric: took 26.049347ms for default service account to be created ...
	I1101 08:45:34.005631  535088 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 08:45:34.029039  535088 system_pods.go:86] 17 kube-system pods found
	I1101 08:45:34.029088  535088 system_pods.go:89] "amd-gpu-device-plugin-vssmp" [a3b8c16e-b583-47df-a5c2-97218d3ec5be] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 08:45:34.029098  535088 system_pods.go:89] "coredns-66bc5c9577-2rqh8" [b131b2b2-f9b9-4197-8bc7-4d1bc185c804] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:45:34.029109  535088 system_pods.go:89] "coredns-66bc5c9577-8b9dw" [7580a21e-bef2-4e34-84b5-b8f67e32b346] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:45:34.029116  535088 system_pods.go:89] "etcd-addons-994396" [9ed2483c-c69f-483c-a489-238983cc8e9e] Running
	I1101 08:45:34.029123  535088 system_pods.go:89] "kube-apiserver-addons-994396" [0d587a06-f48e-4068-bb17-3a28d8a8d340] Running
	I1101 08:45:34.029128  535088 system_pods.go:89] "kube-controller-manager-addons-994396" [e60002dc-411e-458d-b7ea-affbee71d5a0] Running
	I1101 08:45:34.029139  535088 system_pods.go:89] "kube-ingress-dns-minikube" [d947f942-2149-492a-9b4e-1f9c22405815] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:45:34.029144  535088 system_pods.go:89] "kube-proxy-fbmdq" [dc5dd6b4-2f38-4c9d-acd8-92f7984fd96a] Running
	I1101 08:45:34.029150  535088 system_pods.go:89] "kube-scheduler-addons-994396" [bfc13d51-5be5-4462-b4a9-5d4f37f75bc4] Running
	I1101 08:45:34.029156  535088 system_pods.go:89] "metrics-server-85b7d694d7-qpjgn" [ca6b12be-7c02-4334-aa28-6300877d8e89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:45:34.029165  535088 system_pods.go:89] "nvidia-device-plugin-daemonset-bn97p" [8cc13452-31c6-46b5-8efb-e8b44ec63c27] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 08:45:34.029173  535088 system_pods.go:89] "registry-6b586f9694-b4ph6" [f2c8e5be-bee4-4b31-a8dc-ee43d6a6430c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:45:34.029184  535088 system_pods.go:89] "registry-creds-764b6fb674-xstzf" [75cdadc5-e3ea-4aae-9002-6dca21e0f758] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:45:34.029194  535088 system_pods.go:89] "registry-proxy-bzs78" [151e456a-63e0-4527-8511-34c4444fef48] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 08:45:34.029202  535088 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2pbx5" [e9e973a4-20dd-4785-a3d6-1557c012cc76] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:45:34.029211  535088 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jbkmr" [19dc2ae7-668b-4952-9c2d-6602eac4449e] Pending
	I1101 08:45:34.029232  535088 system_pods.go:89] "storage-provisioner" [a0182754-0c9c-458b-a340-20ec025cb56c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 08:45:34.029244  535088 system_pods.go:126] duration metric: took 23.605903ms to wait for k8s-apps to be running ...
	I1101 08:45:34.029259  535088 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 08:45:34.029328  535088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 08:45:34.057589  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:34.060041  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:34.066143  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 08:45:34.536703  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:34.540613  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:35.033279  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:35.057492  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:35.517382  535088 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.707985766s)
	I1101 08:45:35.519009  535088 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 08:45:35.519008  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.230381443s)
	I1101 08:45:35.519151  535088 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-994396"
	I1101 08:45:35.520249  535088 out.go:179] * Verifying csi-hostpath-driver addon...
	I1101 08:45:35.521386  535088 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1101 08:45:35.522322  535088 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1101 08:45:35.523075  535088 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1101 08:45:35.523091  535088 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1101 08:45:35.574185  535088 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 08:45:35.574221  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:35.574179  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:35.589220  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:35.670403  535088 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1101 08:45:35.670443  535088 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1101 08:45:35.926227  535088 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 08:45:35.926260  535088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1101 08:45:36.028744  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:36.029084  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:36.032411  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:36.103812  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 08:45:36.521069  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:36.523012  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:36.530349  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:37.024569  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:37.026839  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:37.029801  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:37.202891  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.952517264s)
	W1101 08:45:37.202946  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:37.202972  535088 retry.go:31] will retry after 301.106324ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:37.203012  535088 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.173650122s)
	I1101 08:45:37.203055  535088 system_svc.go:56] duration metric: took 3.173789622s WaitForService to wait for kubelet
	I1101 08:45:37.203071  535088 kubeadm.go:587] duration metric: took 13.779865062s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 08:45:37.203102  535088 node_conditions.go:102] verifying NodePressure condition ...
	I1101 08:45:37.208388  535088 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 08:45:37.208416  535088 node_conditions.go:123] node cpu capacity is 2
	I1101 08:45:37.208429  535088 node_conditions.go:105] duration metric: took 5.320357ms to run NodePressure ...
	I1101 08:45:37.208441  535088 start.go:242] waiting for startup goroutines ...
	I1101 08:45:37.368099  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.301889566s)
	I1101 08:45:37.504488  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:37.521079  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:37.521246  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:37.528201  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:37.991386  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.887518439s)
	I1101 08:45:37.992795  535088 addons.go:480] Verifying addon gcp-auth=true in "addons-994396"
	I1101 08:45:37.995595  535088 out.go:179] * Verifying gcp-auth addon...
	I1101 08:45:37.997651  535088 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1101 08:45:38.013086  535088 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1101 08:45:38.013118  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:38.028095  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:38.030768  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:38.041146  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:38.502928  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:38.520170  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:38.521930  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:38.526766  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:39.004207  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:39.019028  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:39.024223  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:39.031869  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:39.206009  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.701470957s)
	W1101 08:45:39.206061  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:39.206085  535088 retry.go:31] will retry after 556.568559ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:39.503999  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:39.527340  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:39.537658  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:39.537658  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:39.763081  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:40.006287  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:40.021411  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:40.025825  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:40.028609  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:40.507622  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:40.523293  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:40.527164  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:40.530886  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:41.005619  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:41.021779  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:41.023058  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:41.028879  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:41.134842  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.371696885s)
	W1101 08:45:41.134889  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:41.134933  535088 retry.go:31] will retry after 634.404627ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:41.501998  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:41.519483  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:41.522699  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:41.527571  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:41.769910  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:42.004958  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:42.021144  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:42.021931  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:42.027195  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:42.501545  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:42.519865  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:42.522754  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:42.526903  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:42.775680  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.00572246s)
	W1101 08:45:42.775745  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:42.775781  535088 retry.go:31] will retry after 1.084498807s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:43.002944  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:43.020356  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:43.020475  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:43.134004  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:43.504736  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:43.519636  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:43.520489  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:43.525810  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:43.861263  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:44.001829  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:44.019292  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:44.021251  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:44.026202  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:44.503149  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:44.520624  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:44.520651  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:44.526211  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:45:44.623495  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:44.623540  535088 retry.go:31] will retry after 1.856024944s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:45.001600  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:45.020242  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:45.022140  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:45.026024  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:45.507084  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:45.523761  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:45.524237  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:45.529475  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:46.005033  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:46.108846  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:46.109151  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:46.109369  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:46.479732  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:46.503499  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:46.520286  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:46.526234  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:46.529155  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:47.001657  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:47.019094  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:47.023015  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:47.027997  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:47.507760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:47.519999  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:47.524925  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:47.528391  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:47.666049  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.186267383s)
	W1101 08:45:47.666140  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:47.666174  535088 retry.go:31] will retry after 4.139204607s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:48.003042  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:48.019125  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:48.027235  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:48.031596  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:48.722743  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:48.727291  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:48.727372  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:48.727610  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:49.004382  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:49.019147  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:49.021814  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:49.026878  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:49.504442  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:49.517916  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:49.520088  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:49.525828  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:50.001964  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:50.024108  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:50.024120  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:50.029503  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:50.504014  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:50.523676  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:50.527259  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:50.529569  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:51.002796  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:51.022756  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:51.022985  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:51.026836  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:51.501595  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:51.523272  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:51.526829  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:51.530749  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:51.806085  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:52.003559  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:52.019381  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:52.019451  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:52.027431  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:52.504756  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:52.522177  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:52.526818  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:52.531367  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:53.001310  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:53.018845  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:53.024989  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:53.029380  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:53.104383  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.298241592s)
	W1101 08:45:53.104437  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:53.104469  535088 retry.go:31] will retry after 2.354213604s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:53.504133  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:53.521260  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:53.521459  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:53.530531  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:54.465678  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:54.465798  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:54.466036  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:54.466159  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:54.562016  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:54.562014  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:54.562133  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:54.562454  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:55.001120  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:55.025479  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:55.025582  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:55.026324  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:55.460012  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:45:55.504349  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:55.519300  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:55.521013  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:55.527541  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:56.002846  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:56.025053  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:56.029411  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:56.032019  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:56.575604  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:56.575734  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:56.577952  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:56.577981  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:56.753301  535088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.293228646s)
	W1101 08:45:56.753349  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:56.753376  535088 retry.go:31] will retry after 4.355574242s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:45:57.006174  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:57.021087  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:57.023942  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:57.029154  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:57.505515  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:57.520197  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:57.523156  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:57.525955  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:58.001505  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:58.018201  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:58.022518  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:58.025296  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:58.505701  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:58.524023  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:58.526483  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:58.536508  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:59.001410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:59.017471  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:59.020442  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:59.025457  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:45:59.501507  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:45:59.519043  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:45:59.520094  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:45:59.525760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:00.001248  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:00.017563  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:00.020984  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:00.026549  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:00.501281  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:00.519844  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:00.521324  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:00.525700  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:01.001953  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:01.020105  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:01.020877  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:01.025885  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:01.110059  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:46:01.502129  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:01.519377  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:01.523178  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:01.526440  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:46:01.845885  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:46:01.845957  535088 retry.go:31] will retry after 7.871379914s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:46:02.001335  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:02.019157  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:02.021487  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:02.026236  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:02.502141  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:02.517119  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:02.519718  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:02.526453  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:03.002138  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:03.017025  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:03.019806  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:03.026770  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:03.502833  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:03.520032  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:03.520118  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:03.526559  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:04.064971  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:04.065055  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:04.068066  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:04.068526  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:04.502308  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:04.520197  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:04.521585  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:04.526046  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:05.003330  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:05.017484  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:05.019495  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:05.026496  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:05.501222  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:05.517839  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:05.520724  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:05.525994  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:06.001368  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:06.019614  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:06.020124  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:06.025568  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:06.500972  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:06.518736  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:06.520211  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:06.526135  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:07.002092  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:07.018836  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:07.020757  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:07.025238  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:07.503063  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:07.517984  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:07.519990  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:07.528565  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:08.002059  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:08.018162  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:08.020563  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:08.026357  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:08.501444  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:08.517337  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:08.519389  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:08.525929  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:09.002578  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:09.018521  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:09.020246  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:09.026866  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:09.501972  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:09.518157  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:09.519720  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:09.527087  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:09.718336  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:46:10.004096  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:10.021038  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:10.021333  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:10.027767  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:46:10.413712  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:46:10.413760  535088 retry.go:31] will retry after 19.114067213s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
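Note on the failure pattern above: every apply attempt exits 1 for the same reason. kubectl's client-side validation requires each YAML document passed to `kubectl apply` to declare both apiVersion and kind, and at least one document inside /etc/kubernetes/addons/ig-crd.yaml does not, so the command keeps failing even though the other gadget resources apply cleanly, and minikube's addon retry loop backs off with growing intervals (634ms, 1.08s, 1.86s, 4.14s, 2.35s, 4.36s, 7.87s, 19.1s in the retry.go lines above). The sketch below is only an illustrative reproduction of that check under stated assumptions; it is not minikube's or kubectl's actual validation code, and the gopkg.in/yaml.v3 dependency and the naive "---" document split are assumptions made for the example.

// checkdocs.go - illustrative sketch only (not kubectl's validator): report any
// YAML document in a manifest file that is missing apiVersion or kind, which is
// the same condition behind the "[apiVersion not set, kind not set]" error above.
package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3" // assumption: any YAML parser with Unmarshal works here
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: checkdocs <manifest.yaml>")
		os.Exit(2)
	}
	data, err := os.ReadFile(os.Args[1]) // e.g. a copy of ig-crd.yaml
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Rough split on "---" document separators; kubectl's splitter is stricter,
	// but this is enough to show which document lacks the required fields.
	for i, doc := range strings.Split(string(data), "\n---") {
		var obj struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := yaml.Unmarshal([]byte(doc), &obj); err != nil {
			fmt.Printf("document %d: unparseable: %v\n", i, err)
			continue
		}
		var missing []string
		if obj.APIVersion == "" {
			missing = append(missing, "apiVersion not set")
		}
		if obj.Kind == "" {
			missing = append(missing, "kind not set")
		}
		if len(missing) > 0 {
			fmt.Printf("document %d: [%s]\n", i, strings.Join(missing, ", "))
		}
	}
}

Run against a copy of the offending manifest, this would print the same "[apiVersion not set, kind not set]" pair that appears in the stderr blocks above; the error text itself also notes that `--validate=false` would bypass the check, at the cost of sending the malformed document to the API server unvalidated.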
	I1101 08:46:10.501358  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:10.517730  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:10.520404  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:10.526363  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:11.002849  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:11.019496  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:11.019995  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:11.026025  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:11.501655  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:11.518007  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:11.521219  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:11.525426  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:12.000873  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:12.017867  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:12.020240  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:12.026060  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:12.502263  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:12.518472  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:12.519451  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:12.526084  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:13.002272  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:13.017626  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:13.020404  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:13.025249  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:13.501457  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:13.518992  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:13.520857  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:13.526486  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:14.000572  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:14.019408  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:14.020492  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:14.025038  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:14.501826  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:14.518060  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:14.520198  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:14.526075  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:15.002744  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:15.018115  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:15.019636  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:15.025834  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:15.501625  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:15.518152  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:15.519669  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:15.525079  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:16.001990  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:16.021114  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:16.022918  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:16.025425  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:16.501061  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:16.519200  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:16.519212  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:16.525882  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:17.002326  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:17.017673  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:17.020197  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:17.026945  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:17.502364  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:17.518476  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:17.520804  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:17.526128  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:18.004541  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:18.017957  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:18.020439  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:18.028122  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:18.502479  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:18.519387  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:18.519499  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:18.525828  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:19.003038  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:19.019735  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:19.020844  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:19.027661  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:19.501803  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:19.519280  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:19.519835  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:19.526155  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:20.001793  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:20.018442  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:20.019878  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:20.025324  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:20.501246  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:20.520476  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:20.520774  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:20.525872  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:21.002010  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:21.018221  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:21.019989  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:21.025817  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:21.501814  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:21.518070  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:21.520290  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:21.526096  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:22.002018  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:22.019705  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:22.021053  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:22.026071  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:22.501728  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:22.519405  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:22.520617  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:22.525885  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:23.001744  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:23.019715  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:23.020644  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:23.025597  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:23.502175  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:23.519303  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:23.520222  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:23.526675  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:24.001582  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:24.018997  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:24.020524  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:24.025085  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:24.501770  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:24.519601  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:24.520468  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:24.525222  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:25.002719  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:25.018650  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:25.020825  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:25.026802  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:25.501690  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:25.517716  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:25.520832  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:25.525983  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:26.002212  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:26.017751  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:26.019488  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:26.025775  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:26.501873  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:26.519741  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:26.519825  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:26.526640  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:27.001148  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:27.019101  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:27.019815  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:27.025796  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:27.502066  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:27.518977  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:27.520625  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:27.527501  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:28.000982  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:28.018045  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:28.019539  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:28.026321  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:28.502967  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:28.517882  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:28.520453  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:28.525074  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:29.002093  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:29.019794  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:29.021920  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:29.025114  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:29.502294  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:29.517914  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:29.519213  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:29.526478  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:29.528534  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:46:30.001669  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:30.023801  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:30.027674  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:30.029691  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:46:30.252885  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:46:30.252962  535088 retry.go:31] will retry after 26.857733331s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:46:30.501958  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:30.518713  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:30.519451  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:30.526672  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:31.001425  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:31.019226  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:31.020064  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:31.026340  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:31.501882  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:31.518669  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:31.519450  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:31.526794  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:32.001295  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:32.018253  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:32.020474  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:32.026067  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:32.501521  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:32.520301  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:32.522051  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:32.526250  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:33.003215  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:33.018591  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:33.020188  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:33.026759  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:33.501809  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:33.518399  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:33.520442  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:33.526258  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:34.001781  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:34.019409  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:34.019682  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:34.026569  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:34.501910  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:34.518388  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:34.519877  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:34.526549  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:35.002205  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:35.018104  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:35.019931  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:35.026760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:35.501124  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:35.517626  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:35.519260  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:35.526635  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:36.001556  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:36.017651  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:36.020209  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:36.026600  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:36.501047  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:36.519095  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:36.520391  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:36.526515  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:37.001745  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:37.017677  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:37.019854  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:37.026083  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:37.504677  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:37.518518  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:37.519504  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:37.527753  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:38.001657  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:38.018846  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:38.020360  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:38.026665  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:38.501370  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:38.517442  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:38.519287  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:38.525990  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:39.001713  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:39.017774  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:39.019461  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:39.026372  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:39.500859  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:39.519797  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:39.520622  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:39.525917  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:40.001647  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:40.017652  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:40.019113  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:40.025818  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:40.501928  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:40.518504  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:40.520340  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:40.526037  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:41.002231  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:41.017533  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:41.019687  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:41.025641  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:41.501410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:41.518018  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:41.519326  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:41.527062  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:42.001935  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:42.018556  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:42.020009  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:42.025868  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:42.501909  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:42.519346  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:42.521539  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:42.525544  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:43.003422  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:43.018807  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:43.020340  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:43.026621  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:43.501787  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:43.517772  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:43.520385  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:43.526006  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:44.001729  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:44.018572  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:44.020505  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:44.027512  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:44.500861  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:44.517878  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:44.519941  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:44.525966  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:45.002733  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:45.022017  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:45.023425  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:45.027913  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:45.501505  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:45.518036  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:45.518304  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:45.526497  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:46.000839  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:46.018027  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:46.020574  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:46.025140  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:46.502126  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:46.517267  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:46.519576  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:46.525318  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:47.002664  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:47.019029  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:47.020440  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:47.026307  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:47.502751  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:47.518532  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:47.519877  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:47.525668  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:48.001531  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:48.017987  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:48.018860  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:48.025975  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:48.501993  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:48.519439  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:48.520680  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:48.525869  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:49.003110  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:49.020088  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:49.020281  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:49.026209  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:49.501972  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:49.518761  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:49.520450  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:49.526669  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:50.001945  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:50.019111  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:50.020657  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:50.025651  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:50.501137  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:50.519077  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:50.519422  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:50.526050  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:51.002264  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:51.017514  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:51.020444  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:51.026653  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:51.501218  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:51.517606  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:51.519711  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:51.525538  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:52.001505  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:52.017697  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:52.019403  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:52.027381  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:52.501030  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:52.519679  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:52.520880  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:52.525311  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:53.002074  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:53.017920  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:53.020689  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:53.025485  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:53.501565  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:53.518005  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:53.518985  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:53.525510  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:54.001882  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:54.018972  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:54.019868  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:54.025509  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:54.501041  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:54.519696  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:54.520156  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:54.526253  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:55.003167  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:55.017108  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:55.020966  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:55.025536  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:55.501588  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:55.519412  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:55.520387  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:55.526801  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:56.001703  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:56.018098  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:56.019805  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:56.025874  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:56.501547  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:56.518508  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:56.519409  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:56.527341  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:57.001269  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:57.017737  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:57.019765  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:57.026345  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:57.111554  535088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:46:57.502821  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:57.521781  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:57.523859  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:57.526058  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:46:57.837380  535088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 08:46:57.837579  535088 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
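	(Editor's note on the failure above, a hedged sketch rather than the addon's actual asset: the "apiVersion not set, kind not set" message comes from kubectl's client-side validation, which requires every YAML document in an applied file to declare apiVersion and kind. The ig-crd.yaml shipped with the inspektor-gadget addon evidently contains a document missing those fields, or an empty document such as a stray trailing "---" separator, which triggers the same error. A minimal header of the shape the validator expects looks like the following; the CRD name and group here are placeholders, not taken from the real gadget manifest, and the rest of the spec is omitted:

	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
	metadata:
	  name: traces.gadget.example.io        # placeholder name
	spec: {}                                 # spec body omitted in this sketch

	Passing --validate=false, as the error text suggests, would only bypass the client-side check rather than repair the manifest, so the addon retry loop seen in this log keeps hitting the same error.)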
	I1101 08:46:58.002477  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:58.017866  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:58.019513  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:58.025873  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:58.501877  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:58.518871  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:58.519700  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:58.525438  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:59.004488  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:59.026436  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:59.031423  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:46:59.033704  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:59.508129  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:46:59.521490  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:46:59.521737  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:46:59.526781  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:00.003739  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:00.022791  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:00.022910  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:00.026491  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:00.501517  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:00.517703  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:00.518550  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:00.528527  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:01.010322  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:01.026679  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:01.030087  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:01.030397  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:01.502386  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:01.517530  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:01.522260  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:01.532240  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:02.002156  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:02.022137  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:02.023086  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:02.026049  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:02.504322  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:02.519252  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:02.523461  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:02.528764  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:03.004016  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:03.019471  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:03.021442  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:03.026419  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:03.504419  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:03.519469  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:03.520406  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:03.525550  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:04.002462  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:04.020193  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:04.021462  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:04.026107  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:04.501642  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:04.517490  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:04.519930  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:04.526445  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:05.005197  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:05.018536  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:05.023123  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:05.029475  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:05.502664  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:05.518118  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:05.520518  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:05.526091  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:06.002738  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:06.019575  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:06.022744  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:06.026515  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:06.502554  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:06.519943  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:06.521590  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:06.526208  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:07.004023  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:07.019789  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:07.020273  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:07.026416  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:07.504157  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:07.518612  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:07.520773  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:07.527827  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:08.007295  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:08.020757  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:08.024258  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:08.031878  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:08.505225  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:08.518839  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:08.521622  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:08.525366  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:09.003369  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:09.024660  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:09.024787  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:09.029399  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:09.502978  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:09.520074  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:09.520999  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:09.527832  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:10.002118  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:10.019490  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:10.019688  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:10.026021  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:10.502365  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:10.517980  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:10.519426  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:10.526456  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:11.000763  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:11.017778  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:11.019554  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:11.025361  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:11.502621  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:11.519369  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:11.520248  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:11.525881  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:12.001298  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:12.019652  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:12.020408  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:12.026077  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:12.506179  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:12.518698  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:12.520608  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:12.525646  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:13.004165  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:13.018567  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:13.021172  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:13.026558  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:13.502399  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:13.517614  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:13.520163  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:13.526224  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:14.002692  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:14.018788  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:14.020233  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:14.026247  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:14.502451  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:14.519291  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:14.520395  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:14.528734  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:15.001583  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:15.017574  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:15.019594  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:15.027073  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:15.502087  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:15.518165  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:15.518856  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:15.526691  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:16.002848  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:16.019225  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:16.020564  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:16.025778  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:16.501756  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:16.518991  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:16.520609  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:16.525245  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:17.001845  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:17.019346  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:17.019684  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:17.026396  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:17.502188  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:17.517746  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:17.520856  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:17.525856  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:18.001858  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:18.018536  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:18.021348  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:18.026925  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:18.502390  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:18.517522  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:18.520124  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:18.525853  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:19.001850  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:19.019071  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:19.020953  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:19.025941  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:19.502259  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:19.517542  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:19.520882  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:19.526825  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:20.001558  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:20.018927  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:20.020008  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:20.025511  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:20.501320  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:20.517732  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:20.519487  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:20.526814  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:21.001370  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:21.018101  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:21.019530  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:21.025941  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:21.501703  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:21.517836  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:21.519684  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:21.526074  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:22.001809  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:22.017626  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:22.019534  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:22.025673  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:22.501888  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:22.520695  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:22.521501  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:22.527625  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:23.001636  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:23.017676  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:23.019410  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:23.026546  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:23.502193  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:23.517565  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:23.519741  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:23.525318  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:24.001469  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:24.018681  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:24.021251  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:24.026297  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:24.500658  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:24.517656  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:24.520275  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:24.526953  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:25.002390  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:25.018753  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:25.021470  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:25.026724  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:25.503080  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:25.519469  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:25.522083  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:25.525703  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:26.001480  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:26.018730  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:26.019775  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:26.025922  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:26.501850  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:26.518460  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:26.520597  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:26.526270  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:27.002686  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:27.017503  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:27.019988  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:27.026061  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:27.501773  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:27.519208  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:27.519306  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:27.526944  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:28.001885  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:28.018098  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:28.020961  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:28.026254  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:28.500970  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:28.519603  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:28.521180  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:28.526295  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:29.003607  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:29.018630  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:29.021082  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:29.026312  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:29.501919  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:29.517754  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:29.519736  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:29.525891  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:30.002036  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:30.018828  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:30.020404  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:30.026209  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:30.502329  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:30.517607  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:30.520177  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:30.527152  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:31.003066  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:31.020280  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:31.020496  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:31.026046  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:31.503011  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:31.519101  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:31.520154  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:31.525819  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:32.001349  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:32.017760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:32.020383  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:32.026548  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:32.501020  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:32.519372  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:32.520621  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:32.525197  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:33.001939  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:33.017981  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:33.018721  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:33.025389  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:33.502684  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:33.519286  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:33.519798  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:33.526360  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:34.001915  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:34.018089  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:34.018866  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:34.025884  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:34.502109  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:34.518315  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:34.520992  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:34.525955  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:35.001980  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:35.020058  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:35.020195  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:35.026107  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:35.502513  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:35.519131  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:35.519364  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:35.526431  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:36.001532  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:36.017633  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:36.019879  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:36.025714  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:36.501267  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:36.517441  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:36.519775  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:36.526367  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:37.002311  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:37.017625  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:37.020233  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:37.025830  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:37.502486  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:37.518494  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:37.519337  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:37.526256  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:38.002200  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:38.017679  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:38.020437  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:38.025635  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:38.502121  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:38.518742  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:38.519609  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:38.525528  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:39.001668  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:39.017868  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:39.019195  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:39.027138  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:39.502726  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:39.518837  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:39.519527  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:39.525448  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:40.037966  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:40.038824  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:40.039617  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:40.039888  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:40.510995  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:40.611235  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:40.611494  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:40.612020  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:41.007852  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:41.104319  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:41.105167  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:41.106241  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:41.503207  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:41.519701  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:41.523717  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:41.528111  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:42.002832  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:42.019368  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:42.026027  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:42.028968  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:42.504592  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:42.518781  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:42.522913  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:42.527017  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:43.002059  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:43.021540  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:43.022732  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:43.027733  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:43.501969  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:43.523064  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:43.523122  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:43.526723  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:44.016033  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:44.048228  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:44.048288  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:44.049707  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:44.510334  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:44.517005  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:44.520734  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:44.527760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:45.002493  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:45.025067  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:45.025090  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:45.030831  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:45.503106  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:45.519233  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:45.522740  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:45.526357  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:46.003368  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:46.021702  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:46.023084  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:46.025372  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:46.507201  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:46.528398  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:46.528540  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:46.528597  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:47.005313  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:47.021521  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:47.023522  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:47.030205  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:47.508306  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:47.517975  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:47.523254  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:47.528801  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:48.004599  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:48.018025  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:48.024054  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:48.030295  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:48.504150  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:48.518048  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:48.519937  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:48.527633  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:49.003426  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:49.021317  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:49.104457  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:49.105285  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:49.502613  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:49.520941  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:49.521038  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:49.525762  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:50.002168  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:50.018353  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:50.019606  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:50.025332  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:50.501342  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:50.518265  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:50.520375  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:50.526058  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:51.001482  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:51.018509  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:51.018674  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:51.026149  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:51.502439  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:51.518320  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:51.519717  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:51.525114  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:52.001594  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:52.017697  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:52.019121  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:52.026265  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:52.501713  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:52.517565  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:52.519496  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:52.525722  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:53.001345  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:53.018104  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:53.020275  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:53.025637  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:53.503025  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:53.518670  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:53.520663  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:53.525659  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:54.001263  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:54.018846  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:54.019116  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:54.025335  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:54.502071  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:54.519000  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:54.519010  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:54.525456  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:55.001977  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:55.017957  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:55.021189  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:55.026699  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:55.502333  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:55.517379  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:55.519350  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:55.526773  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:56.001599  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:56.018008  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:56.020215  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:56.025828  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:56.501455  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:56.517521  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:56.519235  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:56.527201  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:57.001827  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:57.020037  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:57.020749  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:57.025827  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:57.503759  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:57.517849  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:57.520371  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:57.526800  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:58.002360  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:58.017843  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:58.020412  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:58.026527  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:58.501394  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:58.517523  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:58.520352  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:58.525725  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:59.002102  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:59.017074  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:59.020520  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:59.026683  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:47:59.502383  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:47:59.517821  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:47:59.520938  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:47:59.525444  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:00.004519  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:00.104585  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:00.104625  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:00.104775  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:00.501109  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:00.518462  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:00.519031  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:00.525932  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:01.001882  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:01.018255  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:01.019640  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:01.025291  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:01.503231  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:01.518634  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:01.520274  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:01.526356  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:02.002389  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:02.018529  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:02.019411  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:02.026657  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:02.501043  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:02.518076  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:02.519080  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:02.526504  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:03.001361  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:03.019762  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:03.022333  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:03.025239  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:03.501714  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:03.519163  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:03.521149  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:03.526410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:04.000747  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:04.019676  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:04.020330  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:04.026159  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:04.502467  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:04.518491  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:04.518845  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:04.525769  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:05.001664  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:05.019454  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:05.019620  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:05.027022  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:05.502850  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:05.518666  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:05.520316  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:05.526009  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:06.002470  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:06.017750  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:06.019816  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:06.025697  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:06.501760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:06.519481  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:06.519738  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:06.525711  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:07.001752  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:07.017749  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:07.019804  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:07.025660  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:07.501792  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:07.517577  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:07.519794  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:07.525244  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:08.002742  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:08.018517  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:08.020369  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:08.026630  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:08.501587  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:08.518305  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:08.519219  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:08.526380  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:09.000977  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:09.018805  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:09.019761  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:09.025690  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:09.501890  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:09.517987  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:09.520782  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:09.525601  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:10.001949  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:10.018921  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:10.020592  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:10.026413  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:10.501660  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:10.518677  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:10.518948  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:10.525564  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:11.001486  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:11.017692  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:11.019759  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:11.025724  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:11.503245  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:11.519474  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:11.520078  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:11.525649  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:12.002655  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:12.017994  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:12.020743  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:12.025544  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:12.500866  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:12.519004  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:12.520797  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:12.527102  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:13.001891  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:13.019380  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:13.020948  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:13.025584  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:13.502039  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:13.519170  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:13.520827  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:13.525891  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:14.002597  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:14.018456  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:14.019344  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:14.025889  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:14.501808  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:14.518199  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:14.520114  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:14.526515  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:15.000809  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:15.017935  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:15.019860  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:15.026010  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:15.502293  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:15.517549  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:15.520189  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:15.603271  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:16.001815  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:16.018392  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:16.020440  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:16.025577  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:16.501456  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:16.517675  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:16.519938  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:16.525413  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:17.000943  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:17.017838  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:17.021846  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:17.026719  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:17.502498  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:17.517532  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:17.518370  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:17.526307  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:18.002824  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:18.019355  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:18.019386  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:18.027193  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:18.501577  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:18.518262  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:18.520767  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:18.525078  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:19.002037  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:19.020156  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:19.021197  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:19.025423  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:19.501921  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:19.519607  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:19.520544  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:19.524793  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:20.001960  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:20.018434  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:20.020315  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:20.026179  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:20.503025  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:20.518911  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:20.520556  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:20.525269  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:21.002029  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:21.024168  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:21.026997  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:21.031803  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:21.502358  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:21.517786  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:21.518786  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:21.525830  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:22.001594  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:22.017338  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:22.018324  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:22.025889  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:22.503054  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:22.520388  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:22.521916  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:22.526202  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:23.002517  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:23.020216  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:23.021156  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:23.028984  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:23.500976  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:23.519154  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:23.519316  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:23.526809  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:24.002882  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:24.019205  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:24.020141  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:24.026965  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:24.501036  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:24.518337  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:24.519991  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:24.525486  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:25.001657  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:25.018947  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:25.019127  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:25.025725  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:25.501581  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:25.518560  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:25.520017  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:25.525518  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:26.001825  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:26.018331  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:26.020369  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:26.026403  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:26.501127  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:26.519632  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:26.520978  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:26.525884  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:27.002361  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:27.018164  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:27.020412  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:27.027021  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:27.502390  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:27.517925  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:27.520125  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:27.525535  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:28.002688  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:28.017322  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:28.019838  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:28.025328  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:28.501474  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:28.517324  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:28.519128  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:28.525804  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:29.001640  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:29.017615  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:29.019699  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:29.025407  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:29.501333  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:29.518228  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:29.520320  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:29.526401  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:30.001257  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:30.017769  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:30.019813  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:30.025681  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:30.501852  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:30.517912  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:30.519457  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:30.525502  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:31.001036  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:31.018891  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:31.019341  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:31.026847  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:31.501891  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:31.517945  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:31.519845  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:31.525477  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:32.002494  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:32.018364  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:32.019047  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:32.025949  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:32.501632  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:32.517753  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:32.519551  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:32.525075  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:33.002010  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:33.019109  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:33.021003  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:33.025940  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:33.503032  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:33.518866  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:33.520801  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:33.525566  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:34.002115  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:34.017835  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:34.020583  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:34.026191  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:34.502465  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:34.517620  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:34.520272  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:34.526608  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:35.000870  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:35.018932  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:35.019718  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:35.025748  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:35.502491  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:35.517523  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:35.519496  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:35.525784  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:36.001520  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:36.019495  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:36.020061  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:36.026348  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:36.501803  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:36.519550  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:36.519863  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:36.526033  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:37.001475  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:37.018365  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:37.019331  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:37.026308  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:37.502572  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:37.517421  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:37.520211  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:37.525925  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:38.001941  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:38.019309  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:38.020493  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:38.027497  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:38.501822  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:38.517786  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:38.520262  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:38.526454  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:39.003835  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:39.019771  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:39.020317  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:39.025953  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:39.501469  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:39.517769  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:39.519531  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:39.526394  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:40.001467  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:40.018767  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:40.018975  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:40.025574  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:40.501327  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:40.517147  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:40.519793  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:40.525870  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:41.001711  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:41.019756  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:41.022733  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:41.025432  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:41.501110  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:41.517577  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:41.520152  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:41.526331  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:42.001665  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:42.018212  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:42.020818  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:42.027301  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:42.502145  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:42.518137  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:42.520139  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:42.525932  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:43.002613  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:43.018231  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:43.019849  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:43.026083  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:43.501054  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:43.518385  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:43.519196  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:43.526209  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:44.002494  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:44.017824  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:44.020797  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:44.026068  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:44.501618  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:44.519136  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:44.519498  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:44.526198  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:45.001727  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:45.019695  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:45.020007  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:45.026210  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:45.502382  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:45.518209  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:45.520090  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:45.526008  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:46.002275  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:46.017575  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:46.020217  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:46.026182  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:46.501858  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:46.518887  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:46.520199  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:46.525849  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:47.001391  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:47.017528  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:47.019856  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:47.026978  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:47.502108  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:47.517185  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:47.519497  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:47.526193  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:48.002439  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:48.018567  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:48.019868  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:48.026369  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:48.502252  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:48.518245  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:48.519830  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:48.525789  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:49.002157  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:49.017975  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:49.020029  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:49.026100  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:49.504825  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:49.517735  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:49.522486  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:49.528548  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:50.005615  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:50.019305  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:50.021640  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:50.027410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:50.501443  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:50.519328  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:50.519829  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:50.526094  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:51.001398  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:51.019374  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:51.020621  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:51.024951  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:51.501419  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:51.517860  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:51.519006  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:51.525945  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:52.002467  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:52.017274  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:52.019058  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:52.025509  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:52.501980  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:52.517824  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:52.519466  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:52.524793  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:53.001604  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:53.018807  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:53.019698  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:53.025324  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:53.501302  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:53.517854  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:53.519844  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:53.526844  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:54.001945  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:54.017746  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:54.020114  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:54.025868  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:54.501860  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:54.519009  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:54.520308  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:54.525824  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:55.001176  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:55.017056  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:55.019336  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:55.026011  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:55.502015  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:55.518868  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:55.519785  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:55.525794  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:56.002253  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:56.017282  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:56.020639  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:56.026305  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:56.501860  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:56.518058  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:56.519766  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:56.525982  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:57.001770  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:57.018418  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:57.021050  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:57.026140  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:57.502619  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:57.517497  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:57.519971  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:57.526180  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:58.002367  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:58.018215  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:58.020881  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:58.025867  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:58.502163  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:58.518906  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:58.519560  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:58.525238  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:59.002160  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:59.018131  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:59.019720  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:59.026035  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:48:59.501498  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:48:59.517861  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:48:59.520038  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:48:59.525911  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:00.008043  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:00.108599  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:00.108605  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:00.108940  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:00.501986  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:00.519116  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:00.519363  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:00.526237  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:01.002941  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:01.018164  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:01.019968  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:01.026086  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:01.501165  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:01.518371  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:01.519716  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:01.526191  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:02.003221  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:02.017756  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:02.020569  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:02.025532  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:02.502303  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:02.517833  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:02.520043  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:02.526299  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:03.001963  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:03.019603  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:03.020175  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:03.026074  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:03.501418  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:03.518548  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:03.519326  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:03.526362  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:04.001337  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:04.017680  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:04.020642  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:04.025160  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:04.501481  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:04.519187  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:04.519354  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:04.526002  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:05.001164  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:05.017266  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:05.020018  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:05.025815  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:05.501835  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:05.518458  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:05.519449  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:05.526988  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:06.001942  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:06.017559  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:06.019230  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:06.027617  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:06.501568  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:06.518953  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:06.519722  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:06.525410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:07.000827  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:07.017696  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:07.019798  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:07.025714  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:07.501984  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:07.519229  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:07.520125  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:07.525931  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:08.002067  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:08.018520  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:08.020314  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:08.026702  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:08.501478  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:08.518992  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:08.519109  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:08.525577  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:09.001061  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:09.019049  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:09.019914  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:09.025870  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:09.501375  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:09.517502  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:09.520013  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:09.525860  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:10.002219  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:10.018451  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:10.019784  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:10.025779  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:10.503078  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:10.519196  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:10.519485  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:10.528833  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:11.001789  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:11.017702  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:11.019708  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:11.025298  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:11.501809  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:11.517966  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:11.520785  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:11.526958  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:12.002467  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:12.017726  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:12.019345  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:12.026841  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:12.501551  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:12.518027  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:12.520217  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:12.526558  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:13.001536  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:13.018736  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:13.020611  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:13.025440  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:13.501358  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:13.517837  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:13.519745  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:13.526510  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:14.002283  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:14.017864  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:14.019800  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:14.025916  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:14.502006  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:14.519062  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:14.519655  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:14.525994  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:15.005447  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:15.017234  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:15.019831  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:15.026557  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:15.501996  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:15.519856  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:15.520083  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:15.525230  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:16.002748  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:16.019355  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:16.019533  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:16.025957  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:16.502580  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:16.517837  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:16.519968  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:16.525850  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:17.001935  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:17.019152  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:17.019529  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:17.025144  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:17.503036  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:17.518401  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:17.520738  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:17.525739  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:18.001970  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:18.018590  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:18.019682  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:18.026543  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:18.505234  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:18.517615  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:18.520770  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:18.525690  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:19.001486  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:19.018177  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:19.019004  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:19.025710  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:19.502094  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:19.519521  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:19.520380  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:19.526127  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:20.002068  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:20.020224  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:20.021127  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:20.025520  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:20.501694  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:20.518963  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:20.520765  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:20.525058  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:21.007417  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:21.019690  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:21.024784  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:21.025732  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:21.504133  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:21.520851  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:21.521975  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:21.528716  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:22.002656  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:22.019037  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:22.020474  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:22.026247  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:22.501702  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:22.517925  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:22.521095  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:22.526859  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:23.002583  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:23.019101  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:23.020457  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:23.025456  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:23.502095  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:23.518464  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:23.522059  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:23.526260  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:24.003337  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:24.017841  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:24.021116  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:24.025850  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:24.501756  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:24.518762  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:24.520412  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:24.527410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:25.001848  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:25.018927  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:25.019525  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:25.025681  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:25.501555  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:25.518984  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:25.519924  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:25.526028  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:26.002318  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:26.018839  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:26.021112  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:26.025766  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:26.501254  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:26.518654  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:26.520701  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:26.525608  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:27.001830  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:27.017870  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:27.020014  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:27.026744  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:27.501677  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:27.519613  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:27.519874  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:27.526220  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:28.002947  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:28.019118  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:28.020560  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:28.025161  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:28.501842  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:28.518344  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:28.519678  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:28.525197  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:29.003014  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:29.018826  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:29.020409  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:29.026088  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:29.501916  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:29.518127  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:29.520850  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:29.525382  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:30.001229  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:30.017453  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:30.019095  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:30.026360  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:30.502510  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:30.517380  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:30.518702  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:30.525410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:31.001216  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:31.018086  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:31.020349  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:31.026668  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:31.502075  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:31.518995  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:31.519726  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:31.526262  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:32.011176  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:32.018083  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:32.022218  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:32.026390  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:32.501928  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:32.518961  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:32.519981  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:32.525961  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:33.002956  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:33.018416  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:33.020053  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:33.026871  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:33.503382  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:33.518628  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:33.520030  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:33.526081  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:34.004511  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:34.017733  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:34.019809  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:34.026157  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:34.502455  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:34.517764  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:34.519007  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:34.525748  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:35.002201  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:35.018354  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:35.020561  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:35.024986  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:35.501676  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:35.518080  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:35.520259  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:35.526231  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:36.002290  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:36.017246  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:36.019747  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:36.025424  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:36.502256  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:36.519181  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:36.519361  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:36.526313  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:37.001733  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:37.017924  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:37.019432  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:37.024916  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:37.501788  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:37.518994  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:37.520329  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:37.526158  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:38.002306  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:38.017816  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:38.020329  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:38.026122  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:38.502214  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:38.517689  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:38.519368  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:38.526566  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:39.001344  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:39.018348  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:39.021395  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:39.026118  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:39.502411  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:39.519218  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:39.519487  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:39.526004  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:40.002233  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:40.017415  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:40.020521  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:40.026057  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:40.502613  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:40.518860  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:40.520188  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:40.526090  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:41.002091  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:41.018506  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:41.019711  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:41.025910  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:41.502421  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:41.518400  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:41.521296  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:41.527921  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:42.003104  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:42.018378  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:42.020878  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:42.026161  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:42.502129  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:42.518686  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:42.520170  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:42.525923  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:43.004390  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:43.019175  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:43.022158  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:43.026467  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:43.504086  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:43.520367  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:43.520550  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:43.525380  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:44.002978  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:44.103477  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:44.103494  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:44.104185  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:44.502233  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:44.519809  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:44.519835  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:44.526423  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:45.000496  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:45.018444  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:45.019039  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:45.026510  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:45.502226  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:45.517482  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:45.520689  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:45.525876  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:46.001596  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:46.019690  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:46.021682  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:46.025805  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:46.501418  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:46.517889  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:46.520740  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:46.526273  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:47.001808  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:47.018410  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:47.020658  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:47.025282  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:47.502482  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:47.517540  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:47.520502  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:47.525363  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:48.002384  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:48.018017  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:48.020110  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:48.026034  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:48.505672  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:48.520527  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:48.523748  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:48.529163  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:49.002861  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:49.017744  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:49.019716  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:49.025934  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:49.503141  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:49.517174  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:49.519166  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:49.526456  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:50.001342  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:50.017719  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:50.020032  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:50.026547  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:50.501789  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:50.519072  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:50.519782  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:50.525316  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:51.002325  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:51.017470  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:51.021020  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:51.026334  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:51.504006  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:51.518610  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:51.520767  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:51.525227  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:52.003295  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:52.018224  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:52.023940  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:52.028747  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:52.507809  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:52.522785  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:52.523541  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:52.527593  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:53.006856  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:53.021835  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:53.023449  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:53.029978  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:53.506277  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:53.523013  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:53.524326  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:53.531084  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:54.006985  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:54.018665  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:54.023247  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:54.026006  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:54.503056  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:54.519576  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:54.522065  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:54.526728  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:55.003139  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:55.020881  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:55.022886  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:55.028847  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:55.502733  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:55.521726  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:55.530711  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:55.532556  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:56.002638  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:56.021902  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:56.026061  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:56.027811  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:56.501943  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:56.518059  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:56.520358  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:56.527803  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:57.001212  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:57.022110  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:57.023066  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:57.027074  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:57.511753  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:57.522407  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:57.525249  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:57.528427  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:58.003779  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:58.019398  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:58.020765  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:58.025087  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:58.502271  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:58.519021  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:58.520012  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:58.526423  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:59.001770  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:59.028122  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:59.028948  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:59.029097  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:49:59.503552  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:49:59.519454  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:49:59.526099  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:49:59.528549  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:00.002150  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:00.018589  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:00.020579  535088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:50:00.026070  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:00.503019  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:00.518818  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:00.521298  535088 kapi.go:107] duration metric: took 4m27.50578325s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1101 08:50:00.526236  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:01.004597  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:01.017417  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:01.026007  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:01.503117  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:01.517929  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:01.526118  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:02.002140  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:02.017309  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:02.026874  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:02.502193  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:02.517206  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:02.526479  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:03.002066  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:03.018800  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:03.026667  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:03.501870  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:03.518027  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:03.526907  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:04.001943  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:04.018110  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:04.026258  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:04.503167  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:04.518066  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:04.526754  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:05.007821  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:05.017748  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:05.025450  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:05.501643  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:05.518495  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:05.525885  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:06.001380  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:06.017918  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:06.026946  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:06.502671  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:06.518784  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:06.526820  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:07.001754  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:07.019448  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:07.025975  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:07.502164  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:07.517678  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:07.526283  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:08.002858  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:08.019273  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:08.027420  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:08.501670  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:08.518047  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:08.526214  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:09.001840  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:09.018206  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:09.027687  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:09.501188  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:09.517532  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:09.526417  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:10.001069  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:10.018157  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:10.026212  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:10.502289  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:10.518055  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:10.526968  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:11.001635  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:11.017991  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:11.025970  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:11.506621  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:11.517412  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:11.526728  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:12.001701  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:12.018119  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:12.025969  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:12.502625  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:12.517475  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:12.526044  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:13.002186  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:13.018439  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:13.026091  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:13.500970  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:13.519505  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:13.525838  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:14.001977  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:14.018285  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:14.027576  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:14.501280  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:14.517529  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:14.526733  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:15.002377  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:15.018228  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:15.026340  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:15.502885  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:15.517651  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:15.527123  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:16.001756  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:16.018508  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:16.026298  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:16.503500  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:16.517929  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:16.526229  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:17.005499  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:17.105592  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:17.105644  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:17.501723  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:17.518760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:17.525930  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:18.009252  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:18.020798  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:18.026084  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:18.502008  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:18.518188  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:18.526054  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:19.001524  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:19.017526  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:19.026186  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:19.501501  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:19.517658  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:19.526525  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:20.001537  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:20.017379  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:20.027037  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:20.501883  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:20.518635  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:20.525619  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:21.001489  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:21.018302  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:21.026672  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:21.501586  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:21.517885  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:21.526477  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:22.000991  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:22.019224  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:22.027309  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:22.502253  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:22.518048  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:22.526007  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:23.002357  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:23.017858  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:23.027027  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:23.500869  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:23.517747  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:23.526047  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:24.002561  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:24.018227  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:24.027043  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:24.502430  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:24.518125  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:24.526108  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:25.002567  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:25.017833  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:25.025933  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:25.502126  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:25.517859  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:25.526354  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:26.000814  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:26.017887  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:26.026568  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:26.502946  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:26.518678  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:26.526480  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:27.001266  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:27.017216  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:27.026609  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:27.501961  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:27.519120  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:27.526911  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:28.002183  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:28.017072  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:28.026509  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:28.503467  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:28.517754  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:28.525800  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:29.001730  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:29.018081  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:29.026318  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:29.503000  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:29.518477  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:29.525663  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:30.001609  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:30.018380  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:30.027170  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:30.502338  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:30.518067  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:30.526337  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:31.001716  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:31.019042  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:31.026553  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:31.502516  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:31.517742  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:31.526076  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:32.003220  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:32.017115  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:32.026003  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:32.503084  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:32.520638  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:32.525815  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:33.002310  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:33.017855  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:33.026358  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:33.501484  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:33.518215  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:33.527345  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:34.001194  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:34.018531  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:34.026371  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:34.501860  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:34.518822  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:34.526665  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:35.000987  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:35.018881  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:35.026261  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:35.503065  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:35.519434  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:35.526091  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:36.002048  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:36.019887  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:36.026789  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:36.502205  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:36.518344  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:36.527132  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:37.001713  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:37.018302  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:37.027636  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:37.502137  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:37.518679  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:37.526770  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:38.002674  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:38.018502  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:38.025131  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:38.502841  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:38.518479  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:38.525394  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:39.003210  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:39.017479  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:39.026633  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:39.501409  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:39.517624  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:39.525765  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:40.001504  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:40.017795  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:40.026635  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:40.504580  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:40.518573  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:40.526384  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:41.000864  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:41.018489  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:41.025191  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:41.501782  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:41.518173  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:41.526463  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:42.000518  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:42.017873  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:42.027131  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:42.502017  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:42.518539  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:42.526000  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:43.002999  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:43.018398  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:43.027329  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:43.501816  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:43.518023  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:43.526878  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:44.002714  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:44.018483  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:44.026808  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:44.502514  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:44.517486  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:44.525494  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:45.000916  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:45.017682  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:45.026270  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:45.504311  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:45.517633  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:45.529587  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:46.005819  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:46.019419  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:46.028247  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:46.501836  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:46.603570  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:46.604017  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:47.002957  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:47.020722  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:47.103677  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:47.504417  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:50:47.529109  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:47.535255  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:48.027116  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:48.027384  535088 kapi.go:107] duration metric: took 5m10.029733807s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1101 08:50:48.029168  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:48.029460  535088 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-994396 cluster.
	I1101 08:50:48.030850  535088 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1101 08:50:48.032437  535088 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1101 08:50:48.524544  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:48.531119  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:49.018726  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:49.026282  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:49.518154  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:49.526614  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:50.018751  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:50.026031  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:50.518756  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:50.526155  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:51.018153  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:51.026760  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:51.518286  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:51.526672  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:52.017371  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:52.027754  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:52.518074  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:52.526416  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:53.018974  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:53.026602  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:53.518144  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:53.526654  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:54.018625  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:54.026704  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:54.517492  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:54.525999  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:55.019257  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:55.027958  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:55.518075  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:55.526142  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:56.018092  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:56.025605  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:56.518596  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:56.525863  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:57.017562  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:57.025851  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:57.518709  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:57.526387  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:58.018590  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:58.025978  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:58.517643  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:58.525642  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:59.018664  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:59.025863  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:50:59.517006  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:50:59.527349  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:00.020576  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:00.029108  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:00.518333  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:00.527511  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:01.018504  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:01.027157  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:01.518405  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:01.526704  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:02.018500  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:02.026694  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:02.517768  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:02.526967  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:03.018243  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:03.026700  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:03.517836  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:03.526719  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:04.017510  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:04.025944  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:04.517662  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:04.526213  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:05.019140  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:05.026847  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:05.522889  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:05.526826  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:06.017784  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:06.026272  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:06.517992  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:06.527109  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:07.018586  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:07.026175  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:07.518974  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:07.526376  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:08.018995  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:08.026615  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:08.517947  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:08.526011  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:09.018511  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:09.025631  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:09.518218  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:09.526593  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:10.018682  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:10.026784  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:10.519095  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:10.527301  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:11.018993  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:11.025690  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:11.518483  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:11.526408  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:12.018208  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:12.027483  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:12.518108  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:12.528506  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:13.018723  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:13.026036  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:13.519547  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:13.525883  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:14.017886  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:14.026485  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:14.518428  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:14.526099  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:15.018816  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:15.028223  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:15.517235  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:15.526608  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:16.019497  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:16.026823  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:16.518374  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:16.526536  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:17.019643  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:17.026636  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:17.519221  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:17.527357  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:18.018310  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:18.027561  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:18.517385  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:18.526970  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:19.018802  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:19.026280  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:19.518858  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:19.527610  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:20.017707  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:20.028465  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:20.518519  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:20.526293  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:51:21.026625  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:21.030779  535088 kapi.go:107] duration metric: took 5m45.508455317s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1101 08:51:21.518734  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:22.018071  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:22.517851  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:23.022943  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:23.518235  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:24.018970  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:24.517611  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:25.019971  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:25.519134  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:26.018419  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:26.518767  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:27.018701  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:27.519283  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:28.019085  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:28.518032  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:29.019182  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:29.519048  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:30.018264  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:30.518858  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:31.018124  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:31.519120  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:32.021956  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:32.519959  535088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:51:33.014506  535088 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=registry" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1101 08:51:33.014547  535088 kapi.go:107] duration metric: took 6m0.000528296s to wait for kubernetes.io/minikube-addons=registry ...
	W1101 08:51:33.014668  535088 out.go:285] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I1101 08:51:33.016548  535088 out.go:179] * Enabled addons: amd-gpu-device-plugin, storage-provisioner, cloud-spanner, ingress-dns, nvidia-device-plugin, registry-creds, metrics-server, yakd, default-storageclass, volumesnapshots, ingress, gcp-auth, csi-hostpath-driver
	I1101 08:51:33.017988  535088 addons.go:515] duration metric: took 6m9.594756816s for enable addons: enabled=[amd-gpu-device-plugin storage-provisioner cloud-spanner ingress-dns nvidia-device-plugin registry-creds metrics-server yakd default-storageclass volumesnapshots ingress gcp-auth csi-hostpath-driver]
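The registry wait above exhausted its full 6m0s budget with the pods still Pending. A hedged sketch of inspecting the same pods by hand, reusing the label selector and context name from this run (the commands are illustrative and not part of the test):

	# list the pods the wait loop was polling
	kubectl --context addons-994396 -n kube-system get pods -l kubernetes.io/minikube-addons=registry -o wide
	# show their events (e.g. image pull failures)
	kubectl --context addons-994396 -n kube-system describe pods -l kubernetes.io/minikube-addons=registry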
	I1101 08:51:33.018036  535088 start.go:247] waiting for cluster config update ...
	I1101 08:51:33.018057  535088 start.go:256] writing updated cluster config ...
	I1101 08:51:33.018363  535088 ssh_runner.go:195] Run: rm -f paused
	I1101 08:51:33.027702  535088 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 08:51:33.035072  535088 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2rqh8" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.039692  535088 pod_ready.go:94] pod "coredns-66bc5c9577-2rqh8" is "Ready"
	I1101 08:51:33.039727  535088 pod_ready.go:86] duration metric: took 4.614622ms for pod "coredns-66bc5c9577-2rqh8" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.041954  535088 pod_ready.go:83] waiting for pod "etcd-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.046075  535088 pod_ready.go:94] pod "etcd-addons-994396" is "Ready"
	I1101 08:51:33.046103  535088 pod_ready.go:86] duration metric: took 4.126087ms for pod "etcd-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.048189  535088 pod_ready.go:83] waiting for pod "kube-apiserver-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.052772  535088 pod_ready.go:94] pod "kube-apiserver-addons-994396" is "Ready"
	I1101 08:51:33.052802  535088 pod_ready.go:86] duration metric: took 4.587761ms for pod "kube-apiserver-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.055446  535088 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.433771  535088 pod_ready.go:94] pod "kube-controller-manager-addons-994396" is "Ready"
	I1101 08:51:33.433801  535088 pod_ready.go:86] duration metric: took 378.329685ms for pod "kube-controller-manager-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:33.634675  535088 pod_ready.go:83] waiting for pod "kube-proxy-fbmdq" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:34.034403  535088 pod_ready.go:94] pod "kube-proxy-fbmdq" is "Ready"
	I1101 08:51:34.034444  535088 pod_ready.go:86] duration metric: took 399.738812ms for pod "kube-proxy-fbmdq" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:34.233978  535088 pod_ready.go:83] waiting for pod "kube-scheduler-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:34.633095  535088 pod_ready.go:94] pod "kube-scheduler-addons-994396" is "Ready"
	I1101 08:51:34.633131  535088 pod_ready.go:86] duration metric: took 399.109096ms for pod "kube-scheduler-addons-994396" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:51:34.633149  535088 pod_ready.go:40] duration metric: took 1.605381934s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
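The pod_ready block above waits, with a 4m0s budget, for every kube-system pod matching the listed control-plane labels to report Ready. A roughly equivalent manual check is sketched below; it is an assumption-laden stand-in (minikube itself polls via client-go rather than kubectl), with the label list copied from the log:

	# check each control-plane selector for Ready pods, mirroring the 4m budget
	for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	           component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	  kubectl --context addons-994396 -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=4m0s
	done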
	I1101 08:51:34.682753  535088 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 08:51:34.684612  535088 out.go:179] * Done! kubectl is now configured to use "addons-994396" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 01 08:54:13 addons-994396 crio[817]: time="2025-11-01 08:54:13.474614247Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761987253474586254,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:454585,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f3be942b-8cfb-4d9b-85de-267720ef34a1 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 08:54:13 addons-994396 crio[817]: time="2025-11-01 08:54:13.475377181Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e4e53c8f-f964-4661-a0b7-de614376eae4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:54:13 addons-994396 crio[817]: time="2025-11-01 08:54:13.475505499Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e4e53c8f-f964-4661-a0b7-de614376eae4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:54:13 addons-994396 crio[817]: time="2025-11-01 08:54:13.476294533Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9aac7eb34690309e8dbd81343ee4a3afed4182f729bfb09119b2d0449fcb5163,PodSandboxId:cdbcecc3e9d43396748d11feb94389c468413b4e4db1f33c0ffbb67ba8cb8455,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761987117609973399,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f6cc746-15b0-4ddb-9f8b-fa3a7e7133ea,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c914a21ca5c30d325bf10151384a21f9bbcc7e25b2d34ca61bfaddd16505122,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1761987080383755595,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:437ef3bce50ac8a7ca0b9a31a96e010fea2dd24bba8a7a5f778f7bb5721a6a9d,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1761987048807726890,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f73cee1644b036ab76f839b96acf06de4009bbf807c978116290374a0b56065c,PodSandboxId:147663b03fe636d80386c5b9e498c5fb95c78d278121e7fb146f12c7e973609d,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761986999531197788,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-9cxnd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bf616938-c2ab-4f4c-92c8-9fa4ab2f6be9,},Annotations:map[string]
string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:862808e2ff30fdd764f8aaf3d5b1a5df067d9f837db07ff0372f86bd3b55cab5,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1761986992483188170,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4eac7bee2514139306d8419dc1c70f3cc677629e0546239a0322053b09eab44,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1761986961550289998,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89e19f39781eba8b57e656eb2450f2409f9b0faf0e3401335506a480d9066dc6,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1761986930173408810,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68bf99b640c16170eb3d1decd09fc1b538fbd6fde76792990703d14d18fd9728,PodSandboxId:c090988aa5e05ea1d7a0662eb99922460d3efcf1e9882123710f19fefe939704,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1761986868787532616,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf63ab79-b3fa-4917-a62b-a0758d1521b0,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39137378c3801cd49058632db343f950f188a84e2ff8cf681c71963efac4314f,PodSandboxId:6eaf5e212ad1c55657254e78247ce413b9c2d3e12e8e2cd69b6ccde788266623,Metadata:&ContainerMetadata{Name
:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1761986866382667222,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ee1d9b2-a99a-4003-9c65-77bd5e500b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80b7ac026d7558ab3c69afb722ff55dfe32d67be3e2bf197089b95da3dd31104,PodSandboxId:5ef1abbd77f24535b60585d2197c8a2259c59626ad0eb005b609003b505409e3,Metada
ta:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761986864620312300,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-jbkmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19dc2ae7-668b-4952-9c2d-6602eac4449e,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63011b6ec66fda56834e6c96c9772b128675e14e51fd5b96d9518a8ba29fa35,PodSandbox
Id:eeeab7772fb0e74c5be38da53381a6b90d0d5c26e9c8b732d2e1c6eb63671c65,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761986864516805400,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-2pbx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9e973a4-20dd-4785-a3d6-1557c012cc76,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6
e0352b147e8a8fe43c9d94072f3f3fcc98914a55a5718cfd5fe168dcdb81f49,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1761986863046366251,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fbb154c5ba009280da1a426866a4cdde2195fb0006640dafb05c0da182a4866,PodSandboxId:058d4f2c90db7e8eae07ad5783426e56e467541eacbcb171f0f9227663407e68,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761986861153109309,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dmt9r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7e49bedc-b72d-400d-bc07-62040e55ac39,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e6c68a57ee535127b46ca112ce1439ee32d248af87fb4452856eb3e38c8eb2e,PodSandboxId:a5dfb28615faf962ed89b8003d79c80e87152c2a8d669af58898bd3254030389,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761986861018576547,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6ptqs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9fe7abf8-c7e2-47ee-ac99-699c34674a22,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d2226436f827529da95ea6b9148e9aad9e62a07499351f701e80b097311d036,PodSandboxId:c449271f0824b108061a1ee1fc23fbe6d16056014d0cfc3011aa2c20b94a8e24,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1761986829754350164,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-bzs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151e456a-63e0-4527-8511-34c4444fef48,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.
ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda41d22ea7ff808cb20920820ccf87f95d0c484f75f853dec58fc5d4aaa461b,PodSandboxId:e07af8e7a3ecad5569ae3da9545b988c374ac9f7b90e8533dd68c1dd6ecef92c,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761986826170977750,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-z8nnd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c555360c-9a9f-4f
dd-aa67-f18c3d2a4eb2,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b56bd6c195bd711f17cd7b927c9fbb20679383d08b6e954d3297e9850be5235,PodSandboxId:6d69749ca9bc78fa01c49c7d0757f3d0eafa3536279a622367a1a3b427e5d70c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1761986821805194743,Labels:map[string]string{io.kubernetes.container.name: local-pa
th-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-9ghvj,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d3c3231a-40d9-42f1-bc78-e2d1a104327a,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4c1be283a7f47690c854c85c4dcacc3e8b42f6727081c4a8a73e3e44c1d194,PodSandboxId:9f7ac0dd48cc1abeb4273f865cde830d51e77c8bd29a6c76ccecaf35745e99f7,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:176198675844940796
3,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d947f942-2149-492a-9b4e-1f9c22405815,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad7748982f904bf89ac86d1b7be83acfe37cfe9d240db5a3d2236808b8910a3,PodSandboxId:ca1dd787f338ac0254f2b930b7369f671d7ee68d7732bee6af1cf786d745c456,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c887
2c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761986733821709901,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0182754-0c9c-458b-a340-20ec025cb56c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bb5f4d4e768dfe5c0cf6bc80363bf72a32d74ddba50c19fc7e3e82b2268e1d3,PodSandboxId:fec37181f6706eb4994bc850d0e6623521190c923720024b4407780ba5c3168a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761986732059653348,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-vssmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b8c16e-b583-47df-a5c2-97218d3ec5be,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ff7b8e8784408623315cf07e8942d13f74e52cb65ad09e2d25796114020c1,PodSandboxId:d62d15d11c4955eb24e7866e8b7732b6d4471d399c0e33cef74d06eb40917eec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e
0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761986725130503569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2rqh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131b2b2-f9b9-4197-8bc7-4d1bc185c804,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0a2f86b38f42fab057b3fea7994c150
73ec1d05f3db97341f0fed0ad342cf9,PodSandboxId:e1fb2fcb1123b9a18ac17a1d8481c82478eed03828d094aab60d26b7c2f58bbd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761986724242985390,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fbmdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5dd6b4-2f38-4c9d-acd8-92f7984fd96a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80489befa62b8185c103a7d016a78a5924e4c5187536cb66142d1c5f8cc4a5b5,P
odSandboxId:d4cfa30f1a32a450d85f51370323574b5a0bcae75643efe39250a8b24cc1a1c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761986712208719638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0eeda84be59c6c1c023d04bf2f88758,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:844d913e662bc4587cf597763a1bad42bb8a4bf500ce948d822cfcb86a7e9fde,PodSandboxId:e2f739ab181cd43a508788c71e0d98b6ca0994d643a2896de2364e7f842ffa0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761986712197993742,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d081dd6df6b55662a095a017ad5712,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdeec4098b47d6e27b77f71ac1761aeb26a09c97d53566cde6a7c5ae79150c25,PodSandboxId:f1c88f09470e5834b2b0cfcdaddaf03ac25c10fd6f3492dc69b5941eb059bbae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761986712168522475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abcff5cb337834c6fd7a11d68a6b7be4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35bb45a49c1f528c9112deb8bfa037389ae6fae43afcbb2f86e4c3ed61156bf8,PodSandboxId:80615bf9878bb70db26be3ecace94169c4b7e503113541f10f7df27e95d8c035,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761986712170158026,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5912e2b5f9c4192157a57bf3d5021f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505
,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e4e53c8f-f964-4661-a0b7-de614376eae4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:54:13 addons-994396 crio[817]: time="2025-11-01 08:54:13.523273839Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7069b059-71f9-4f07-bbae-a8833c7f034f name=/runtime.v1.RuntimeService/Version
	Nov 01 08:54:13 addons-994396 crio[817]: time="2025-11-01 08:54:13.523429417Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7069b059-71f9-4f07-bbae-a8833c7f034f name=/runtime.v1.RuntimeService/Version
	Nov 01 08:54:13 addons-994396 crio[817]: time="2025-11-01 08:54:13.524924230Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a00e544f-ded5-4239-b46b-61d1d07dcddc name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 08:54:13 addons-994396 crio[817]: time="2025-11-01 08:54:13.526802416Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761987253526774213,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:454585,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a00e544f-ded5-4239-b46b-61d1d07dcddc name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 08:54:13 addons-994396 crio[817]: time="2025-11-01 08:54:13.527756828Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0b0b888-0cc4-49ed-a922-a29773fdfc7d name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:54:13 addons-994396 crio[817]: time="2025-11-01 08:54:13.527843574Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0b0b888-0cc4-49ed-a922-a29773fdfc7d name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:54:13 addons-994396 crio[817]: time="2025-11-01 08:54:13.528602891Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9aac7eb34690309e8dbd81343ee4a3afed4182f729bfb09119b2d0449fcb5163,PodSandboxId:cdbcecc3e9d43396748d11feb94389c468413b4e4db1f33c0ffbb67ba8cb8455,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761987117609973399,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f6cc746-15b0-4ddb-9f8b-fa3a7e7133ea,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c914a21ca5c30d325bf10151384a21f9bbcc7e25b2d34ca61bfaddd16505122,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1761987080383755595,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:437ef3bce50ac8a7ca0b9a31a96e010fea2dd24bba8a7a5f778f7bb5721a6a9d,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1761987048807726890,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f73cee1644b036ab76f839b96acf06de4009bbf807c978116290374a0b56065c,PodSandboxId:147663b03fe636d80386c5b9e498c5fb95c78d278121e7fb146f12c7e973609d,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761986999531197788,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-9cxnd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bf616938-c2ab-4f4c-92c8-9fa4ab2f6be9,},Annotations:map[string]
string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:862808e2ff30fdd764f8aaf3d5b1a5df067d9f837db07ff0372f86bd3b55cab5,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1761986992483188170,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4eac7bee2514139306d8419dc1c70f3cc677629e0546239a0322053b09eab44,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1761986961550289998,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89e19f39781eba8b57e656eb2450f2409f9b0faf0e3401335506a480d9066dc6,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1761986930173408810,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68bf99b640c16170eb3d1decd09fc1b538fbd6fde76792990703d14d18fd9728,PodSandboxId:c090988aa5e05ea1d7a0662eb99922460d3efcf1e9882123710f19fefe939704,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1761986868787532616,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf63ab79-b3fa-4917-a62b-a0758d1521b0,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39137378c3801cd49058632db343f950f188a84e2ff8cf681c71963efac4314f,PodSandboxId:6eaf5e212ad1c55657254e78247ce413b9c2d3e12e8e2cd69b6ccde788266623,Metadata:&ContainerMetadata{Name
:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1761986866382667222,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ee1d9b2-a99a-4003-9c65-77bd5e500b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80b7ac026d7558ab3c69afb722ff55dfe32d67be3e2bf197089b95da3dd31104,PodSandboxId:5ef1abbd77f24535b60585d2197c8a2259c59626ad0eb005b609003b505409e3,Metada
ta:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761986864620312300,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-jbkmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19dc2ae7-668b-4952-9c2d-6602eac4449e,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63011b6ec66fda56834e6c96c9772b128675e14e51fd5b96d9518a8ba29fa35,PodSandbox
Id:eeeab7772fb0e74c5be38da53381a6b90d0d5c26e9c8b732d2e1c6eb63671c65,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761986864516805400,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-2pbx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9e973a4-20dd-4785-a3d6-1557c012cc76,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6
e0352b147e8a8fe43c9d94072f3f3fcc98914a55a5718cfd5fe168dcdb81f49,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1761986863046366251,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fbb154c5ba009280da1a426866a4cdde2195fb0006640dafb05c0da182a4866,PodSandboxId:058d4f2c90db7e8eae07ad5783426e56e467541eacbcb171f0f9227663407e68,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761986861153109309,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dmt9r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7e49bedc-b72d-400d-bc07-62040e55ac39,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e6c68a57ee535127b46ca112ce1439ee32d248af87fb4452856eb3e38c8eb2e,PodSandboxId:a5dfb28615faf962ed89b8003d79c80e87152c2a8d669af58898bd3254030389,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761986861018576547,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6ptqs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9fe7abf8-c7e2-47ee-ac99-699c34674a22,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d2226436f827529da95ea6b9148e9aad9e62a07499351f701e80b097311d036,PodSandboxId:c449271f0824b108061a1ee1fc23fbe6d16056014d0cfc3011aa2c20b94a8e24,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1761986829754350164,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-bzs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151e456a-63e0-4527-8511-34c4444fef48,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.
ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda41d22ea7ff808cb20920820ccf87f95d0c484f75f853dec58fc5d4aaa461b,PodSandboxId:e07af8e7a3ecad5569ae3da9545b988c374ac9f7b90e8533dd68c1dd6ecef92c,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761986826170977750,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-z8nnd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c555360c-9a9f-4f
dd-aa67-f18c3d2a4eb2,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b56bd6c195bd711f17cd7b927c9fbb20679383d08b6e954d3297e9850be5235,PodSandboxId:6d69749ca9bc78fa01c49c7d0757f3d0eafa3536279a622367a1a3b427e5d70c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1761986821805194743,Labels:map[string]string{io.kubernetes.container.name: local-pa
th-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-9ghvj,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d3c3231a-40d9-42f1-bc78-e2d1a104327a,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4c1be283a7f47690c854c85c4dcacc3e8b42f6727081c4a8a73e3e44c1d194,PodSandboxId:9f7ac0dd48cc1abeb4273f865cde830d51e77c8bd29a6c76ccecaf35745e99f7,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:176198675844940796
3,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d947f942-2149-492a-9b4e-1f9c22405815,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad7748982f904bf89ac86d1b7be83acfe37cfe9d240db5a3d2236808b8910a3,PodSandboxId:ca1dd787f338ac0254f2b930b7369f671d7ee68d7732bee6af1cf786d745c456,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c887
2c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761986733821709901,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0182754-0c9c-458b-a340-20ec025cb56c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bb5f4d4e768dfe5c0cf6bc80363bf72a32d74ddba50c19fc7e3e82b2268e1d3,PodSandboxId:fec37181f6706eb4994bc850d0e6623521190c923720024b4407780ba5c3168a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761986732059653348,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-vssmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b8c16e-b583-47df-a5c2-97218d3ec5be,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ff7b8e8784408623315cf07e8942d13f74e52cb65ad09e2d25796114020c1,PodSandboxId:d62d15d11c4955eb24e7866e8b7732b6d4471d399c0e33cef74d06eb40917eec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e
0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761986725130503569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2rqh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131b2b2-f9b9-4197-8bc7-4d1bc185c804,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0a2f86b38f42fab057b3fea7994c150
73ec1d05f3db97341f0fed0ad342cf9,PodSandboxId:e1fb2fcb1123b9a18ac17a1d8481c82478eed03828d094aab60d26b7c2f58bbd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761986724242985390,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fbmdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5dd6b4-2f38-4c9d-acd8-92f7984fd96a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80489befa62b8185c103a7d016a78a5924e4c5187536cb66142d1c5f8cc4a5b5,P
odSandboxId:d4cfa30f1a32a450d85f51370323574b5a0bcae75643efe39250a8b24cc1a1c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761986712208719638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0eeda84be59c6c1c023d04bf2f88758,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:844d913e662bc4587cf597763a1bad42bb8a4bf500ce948d822cfcb86a7e9fde,PodSandboxId:e2f739ab181cd43a508788c71e0d98b6ca0994d643a2896de2364e7f842ffa0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761986712197993742,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d081dd6df6b55662a095a017ad5712,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdeec4098b47d6e27b77f71ac1761aeb26a09c97d53566cde6a7c5ae79150c25,PodSandboxId:f1c88f09470e5834b2b0cfcdaddaf03ac25c10fd6f3492dc69b5941eb059bbae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761986712168522475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abcff5cb337834c6fd7a11d68a6b7be4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35bb45a49c1f528c9112deb8bfa037389ae6fae43afcbb2f86e4c3ed61156bf8,PodSandboxId:80615bf9878bb70db26be3ecace94169c4b7e503113541f10f7df27e95d8c035,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761986712170158026,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5912e2b5f9c4192157a57bf3d5021f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505
,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c0b0b888-0cc4-49ed-a922-a29773fdfc7d name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:54:13 addons-994396 crio[817]: time="2025-11-01 08:54:13.567107828Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f826338d-8a79-4e9b-972d-9ba5c4a4bc64 name=/runtime.v1.RuntimeService/Version
	Nov 01 08:54:13 addons-994396 crio[817]: time="2025-11-01 08:54:13.567180524Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f826338d-8a79-4e9b-972d-9ba5c4a4bc64 name=/runtime.v1.RuntimeService/Version
	Nov 01 08:54:13 addons-994396 crio[817]: time="2025-11-01 08:54:13.568696927Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=883c3103-432c-466b-8641-707a6d7de381 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 08:54:13 addons-994396 crio[817]: time="2025-11-01 08:54:13.570053171Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761987253569982560,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:454585,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=883c3103-432c-466b-8641-707a6d7de381 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 08:54:13 addons-994396 crio[817]: time="2025-11-01 08:54:13.571357704Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b60f5241-fbb2-44f0-b4cf-721e8a2babee name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:54:13 addons-994396 crio[817]: time="2025-11-01 08:54:13.571661445Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b60f5241-fbb2-44f0-b4cf-721e8a2babee name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:54:13 addons-994396 crio[817]: time="2025-11-01 08:54:13.573787050Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9aac7eb34690309e8dbd81343ee4a3afed4182f729bfb09119b2d0449fcb5163,PodSandboxId:cdbcecc3e9d43396748d11feb94389c468413b4e4db1f33c0ffbb67ba8cb8455,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761987117609973399,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f6cc746-15b0-4ddb-9f8b-fa3a7e7133ea,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c914a21ca5c30d325bf10151384a21f9bbcc7e25b2d34ca61bfaddd16505122,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1761987080383755595,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:437ef3bce50ac8a7ca0b9a31a96e010fea2dd24bba8a7a5f778f7bb5721a6a9d,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1761987048807726890,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f73cee1644b036ab76f839b96acf06de4009bbf807c978116290374a0b56065c,PodSandboxId:147663b03fe636d80386c5b9e498c5fb95c78d278121e7fb146f12c7e973609d,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761986999531197788,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-9cxnd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bf616938-c2ab-4f4c-92c8-9fa4ab2f6be9,},Annotations:map[string]
string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:862808e2ff30fdd764f8aaf3d5b1a5df067d9f837db07ff0372f86bd3b55cab5,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1761986992483188170,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4eac7bee2514139306d8419dc1c70f3cc677629e0546239a0322053b09eab44,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1761986961550289998,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89e19f39781eba8b57e656eb2450f2409f9b0faf0e3401335506a480d9066dc6,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1761986930173408810,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68bf99b640c16170eb3d1decd09fc1b538fbd6fde76792990703d14d18fd9728,PodSandboxId:c090988aa5e05ea1d7a0662eb99922460d3efcf1e9882123710f19fefe939704,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1761986868787532616,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf63ab79-b3fa-4917-a62b-a0758d1521b0,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39137378c3801cd49058632db343f950f188a84e2ff8cf681c71963efac4314f,PodSandboxId:6eaf5e212ad1c55657254e78247ce413b9c2d3e12e8e2cd69b6ccde788266623,Metadata:&ContainerMetadata{Name
:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1761986866382667222,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ee1d9b2-a99a-4003-9c65-77bd5e500b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80b7ac026d7558ab3c69afb722ff55dfe32d67be3e2bf197089b95da3dd31104,PodSandboxId:5ef1abbd77f24535b60585d2197c8a2259c59626ad0eb005b609003b505409e3,Metada
ta:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761986864620312300,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-jbkmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19dc2ae7-668b-4952-9c2d-6602eac4449e,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63011b6ec66fda56834e6c96c9772b128675e14e51fd5b96d9518a8ba29fa35,PodSandbox
Id:eeeab7772fb0e74c5be38da53381a6b90d0d5c26e9c8b732d2e1c6eb63671c65,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761986864516805400,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-2pbx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9e973a4-20dd-4785-a3d6-1557c012cc76,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6
e0352b147e8a8fe43c9d94072f3f3fcc98914a55a5718cfd5fe168dcdb81f49,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1761986863046366251,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fbb154c5ba009280da1a426866a4cdde2195fb0006640dafb05c0da182a4866,PodSandboxId:058d4f2c90db7e8eae07ad5783426e56e467541eacbcb171f0f9227663407e68,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761986861153109309,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dmt9r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7e49bedc-b72d-400d-bc07-62040e55ac39,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e6c68a57ee535127b46ca112ce1439ee32d248af87fb4452856eb3e38c8eb2e,PodSandboxId:a5dfb28615faf962ed89b8003d79c80e87152c2a8d669af58898bd3254030389,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761986861018576547,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6ptqs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9fe7abf8-c7e2-47ee-ac99-699c34674a22,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d2226436f827529da95ea6b9148e9aad9e62a07499351f701e80b097311d036,PodSandboxId:c449271f0824b108061a1ee1fc23fbe6d16056014d0cfc3011aa2c20b94a8e24,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1761986829754350164,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-bzs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151e456a-63e0-4527-8511-34c4444fef48,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.
ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda41d22ea7ff808cb20920820ccf87f95d0c484f75f853dec58fc5d4aaa461b,PodSandboxId:e07af8e7a3ecad5569ae3da9545b988c374ac9f7b90e8533dd68c1dd6ecef92c,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761986826170977750,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-z8nnd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c555360c-9a9f-4f
dd-aa67-f18c3d2a4eb2,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b56bd6c195bd711f17cd7b927c9fbb20679383d08b6e954d3297e9850be5235,PodSandboxId:6d69749ca9bc78fa01c49c7d0757f3d0eafa3536279a622367a1a3b427e5d70c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1761986821805194743,Labels:map[string]string{io.kubernetes.container.name: local-pa
th-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-9ghvj,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d3c3231a-40d9-42f1-bc78-e2d1a104327a,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4c1be283a7f47690c854c85c4dcacc3e8b42f6727081c4a8a73e3e44c1d194,PodSandboxId:9f7ac0dd48cc1abeb4273f865cde830d51e77c8bd29a6c76ccecaf35745e99f7,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:176198675844940796
3,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d947f942-2149-492a-9b4e-1f9c22405815,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad7748982f904bf89ac86d1b7be83acfe37cfe9d240db5a3d2236808b8910a3,PodSandboxId:ca1dd787f338ac0254f2b930b7369f671d7ee68d7732bee6af1cf786d745c456,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c887
2c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761986733821709901,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0182754-0c9c-458b-a340-20ec025cb56c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bb5f4d4e768dfe5c0cf6bc80363bf72a32d74ddba50c19fc7e3e82b2268e1d3,PodSandboxId:fec37181f6706eb4994bc850d0e6623521190c923720024b4407780ba5c3168a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761986732059653348,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-vssmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b8c16e-b583-47df-a5c2-97218d3ec5be,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ff7b8e8784408623315cf07e8942d13f74e52cb65ad09e2d25796114020c1,PodSandboxId:d62d15d11c4955eb24e7866e8b7732b6d4471d399c0e33cef74d06eb40917eec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e
0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761986725130503569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2rqh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131b2b2-f9b9-4197-8bc7-4d1bc185c804,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0a2f86b38f42fab057b3fea7994c150
73ec1d05f3db97341f0fed0ad342cf9,PodSandboxId:e1fb2fcb1123b9a18ac17a1d8481c82478eed03828d094aab60d26b7c2f58bbd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761986724242985390,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fbmdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5dd6b4-2f38-4c9d-acd8-92f7984fd96a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80489befa62b8185c103a7d016a78a5924e4c5187536cb66142d1c5f8cc4a5b5,P
odSandboxId:d4cfa30f1a32a450d85f51370323574b5a0bcae75643efe39250a8b24cc1a1c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761986712208719638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0eeda84be59c6c1c023d04bf2f88758,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:844d913e662bc4587cf597763a1bad42bb8a4bf500ce948d822cfcb86a7e9fde,PodSandboxId:e2f739ab181cd43a508788c71e0d98b6ca0994d643a2896de2364e7f842ffa0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761986712197993742,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d081dd6df6b55662a095a017ad5712,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdeec4098b47d6e27b77f71ac1761aeb26a09c97d53566cde6a7c5ae79150c25,PodSandboxId:f1c88f09470e5834b2b0cfcdaddaf03ac25c10fd6f3492dc69b5941eb059bbae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761986712168522475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abcff5cb337834c6fd7a11d68a6b7be4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35bb45a49c1f528c9112deb8bfa037389ae6fae43afcbb2f86e4c3ed61156bf8,PodSandboxId:80615bf9878bb70db26be3ecace94169c4b7e503113541f10f7df27e95d8c035,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761986712170158026,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5912e2b5f9c4192157a57bf3d5021f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505
,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b60f5241-fbb2-44f0-b4cf-721e8a2babee name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:54:13 addons-994396 crio[817]: time="2025-11-01 08:54:13.621147298Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=25475cb3-9b1a-47b2-94d3-f8becd170b40 name=/runtime.v1.RuntimeService/Version
	Nov 01 08:54:13 addons-994396 crio[817]: time="2025-11-01 08:54:13.621244582Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=25475cb3-9b1a-47b2-94d3-f8becd170b40 name=/runtime.v1.RuntimeService/Version
	Nov 01 08:54:13 addons-994396 crio[817]: time="2025-11-01 08:54:13.622705306Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ca9aa2e3-18de-48ef-889b-ba5f3248354b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 08:54:13 addons-994396 crio[817]: time="2025-11-01 08:54:13.624273521Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761987253624245815,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:454585,},InodesUsed:&UInt64Value{Value:166,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca9aa2e3-18de-48ef-889b-ba5f3248354b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 08:54:13 addons-994396 crio[817]: time="2025-11-01 08:54:13.624940322Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4890c79d-f93f-4487-b301-aedc556dac39 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:54:13 addons-994396 crio[817]: time="2025-11-01 08:54:13.625018242Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4890c79d-f93f-4487-b301-aedc556dac39 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:54:13 addons-994396 crio[817]: time="2025-11-01 08:54:13.625508004Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9aac7eb34690309e8dbd81343ee4a3afed4182f729bfb09119b2d0449fcb5163,PodSandboxId:cdbcecc3e9d43396748d11feb94389c468413b4e4db1f33c0ffbb67ba8cb8455,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761987117609973399,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4f6cc746-15b0-4ddb-9f8b-fa3a7e7133ea,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c914a21ca5c30d325bf10151384a21f9bbcc7e25b2d34ca61bfaddd16505122,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1761987080383755595,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:437ef3bce50ac8a7ca0b9a31a96e010fea2dd24bba8a7a5f778f7bb5721a6a9d,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1761987048807726890,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f73cee1644b036ab76f839b96acf06de4009bbf807c978116290374a0b56065c,PodSandboxId:147663b03fe636d80386c5b9e498c5fb95c78d278121e7fb146f12c7e973609d,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761986999531197788,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-9cxnd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bf616938-c2ab-4f4c-92c8-9fa4ab2f6be9,},Annotations:map[string]
string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:862808e2ff30fdd764f8aaf3d5b1a5df067d9f837db07ff0372f86bd3b55cab5,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1761986992483188170,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4eac7bee2514139306d8419dc1c70f3cc677629e0546239a0322053b09eab44,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1761986961550289998,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89e19f39781eba8b57e656eb2450f2409f9b0faf0e3401335506a480d9066dc6,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1761986930173408810,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68bf99b640c16170eb3d1decd09fc1b538fbd6fde76792990703d14d18fd9728,PodSandboxId:c090988aa5e05ea1d7a0662eb99922460d3efcf1e9882123710f19fefe939704,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1761986868787532616,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf63ab79-b3fa-4917-a62b-a0758d1521b0,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39137378c3801cd49058632db343f950f188a84e2ff8cf681c71963efac4314f,PodSandboxId:6eaf5e212ad1c55657254e78247ce413b9c2d3e12e8e2cd69b6ccde788266623,Metadata:&ContainerMetadata{Name
:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1761986866382667222,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ee1d9b2-a99a-4003-9c65-77bd5e500b0a,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80b7ac026d7558ab3c69afb722ff55dfe32d67be3e2bf197089b95da3dd31104,PodSandboxId:5ef1abbd77f24535b60585d2197c8a2259c59626ad0eb005b609003b505409e3,Metada
ta:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761986864620312300,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-jbkmr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19dc2ae7-668b-4952-9c2d-6602eac4449e,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63011b6ec66fda56834e6c96c9772b128675e14e51fd5b96d9518a8ba29fa35,PodSandbox
Id:eeeab7772fb0e74c5be38da53381a6b90d0d5c26e9c8b732d2e1c6eb63671c65,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761986864516805400,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-2pbx5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9e973a4-20dd-4785-a3d6-1557c012cc76,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6
e0352b147e8a8fe43c9d94072f3f3fcc98914a55a5718cfd5fe168dcdb81f49,PodSandboxId:89c5974bdcafdcb05490f9f2c95711e64f78832b2759c64ede44020fbdcc0db8,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1761986863046366251,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-7l7ps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c291ec-002e-43dc-acb1-5bc4483fa6fd,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fbb154c5ba009280da1a426866a4cdde2195fb0006640dafb05c0da182a4866,PodSandboxId:058d4f2c90db7e8eae07ad5783426e56e467541eacbcb171f0f9227663407e68,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761986861153109309,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dmt9r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7e49bedc-b72d-400d-bc07-62040e55ac39,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e6c68a57ee535127b46ca112ce1439ee32d248af87fb4452856eb3e38c8eb2e,PodSandboxId:a5dfb28615faf962ed89b8003d79c80e87152c2a8d669af58898bd3254030389,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761986861018576547,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6ptqs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9fe7abf8-c7e2-47ee-ac99-699c34674a22,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d2226436f827529da95ea6b9148e9aad9e62a07499351f701e80b097311d036,PodSandboxId:c449271f0824b108061a1ee1fc23fbe6d16056014d0cfc3011aa2c20b94a8e24,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b1c9f9ef5f0c2a10135fe0324effdb7d594d50e15bb2c6921177b9db038f1d21,State:CONTAINER_RUNNING,CreatedAt:1761986829754350164,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-bzs78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151e456a-63e0-4527-8511-34c4444fef48,},Annotations:map[string]string{io.kubernetes.container.hash: 3448d551,io.kubernetes.container.
ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda41d22ea7ff808cb20920820ccf87f95d0c484f75f853dec58fc5d4aaa461b,PodSandboxId:e07af8e7a3ecad5569ae3da9545b988c374ac9f7b90e8533dd68c1dd6ecef92c,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761986826170977750,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-z8nnd,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c555360c-9a9f-4f
dd-aa67-f18c3d2a4eb2,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b56bd6c195bd711f17cd7b927c9fbb20679383d08b6e954d3297e9850be5235,PodSandboxId:6d69749ca9bc78fa01c49c7d0757f3d0eafa3536279a622367a1a3b427e5d70c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1761986821805194743,Labels:map[string]string{io.kubernetes.container.name: local-pa
th-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-9ghvj,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: d3c3231a-40d9-42f1-bc78-e2d1a104327a,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b4c1be283a7f47690c854c85c4dcacc3e8b42f6727081c4a8a73e3e44c1d194,PodSandboxId:9f7ac0dd48cc1abeb4273f865cde830d51e77c8bd29a6c76ccecaf35745e99f7,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:176198675844940796
3,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d947f942-2149-492a-9b4e-1f9c22405815,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ad7748982f904bf89ac86d1b7be83acfe37cfe9d240db5a3d2236808b8910a3,PodSandboxId:ca1dd787f338ac0254f2b930b7369f671d7ee68d7732bee6af1cf786d745c456,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c887
2c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761986733821709901,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0182754-0c9c-458b-a340-20ec025cb56c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bb5f4d4e768dfe5c0cf6bc80363bf72a32d74ddba50c19fc7e3e82b2268e1d3,PodSandboxId:fec37181f6706eb4994bc850d0e6623521190c923720024b4407780ba5c3168a,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761986732059653348,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-vssmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b8c16e-b583-47df-a5c2-97218d3ec5be,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0ff7b8e8784408623315cf07e8942d13f74e52cb65ad09e2d25796114020c1,PodSandboxId:d62d15d11c4955eb24e7866e8b7732b6d4471d399c0e33cef74d06eb40917eec,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e
0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761986725130503569,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-2rqh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b131b2b2-f9b9-4197-8bc7-4d1bc185c804,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d0a2f86b38f42fab057b3fea7994c150
73ec1d05f3db97341f0fed0ad342cf9,PodSandboxId:e1fb2fcb1123b9a18ac17a1d8481c82478eed03828d094aab60d26b7c2f58bbd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761986724242985390,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fbmdq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5dd6b4-2f38-4c9d-acd8-92f7984fd96a,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80489befa62b8185c103a7d016a78a5924e4c5187536cb66142d1c5f8cc4a5b5,P
odSandboxId:d4cfa30f1a32a450d85f51370323574b5a0bcae75643efe39250a8b24cc1a1c1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761986712208719638,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0eeda84be59c6c1c023d04bf2f88758,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:844d913e662bc4587cf597763a1bad42bb8a4bf500ce948d822cfcb86a7e9fde,PodSandboxId:e2f739ab181cd43a508788c71e0d98b6ca0994d643a2896de2364e7f842ffa0d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761986712197993742,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d081dd6df6b55662a095a017ad5712,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.k
ubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdeec4098b47d6e27b77f71ac1761aeb26a09c97d53566cde6a7c5ae79150c25,PodSandboxId:f1c88f09470e5834b2b0cfcdaddaf03ac25c10fd6f3492dc69b5941eb059bbae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761986712168522475,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abcff5cb337834c6fd7a11d68a6b7be4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35bb45a49c1f528c9112deb8bfa037389ae6fae43afcbb2f86e4c3ed61156bf8,PodSandboxId:80615bf9878bb70db26be3ecace94169c4b7e503113541f10f7df27e95d8c035,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761986712170158026,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-994396,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5912e2b5f9c4192157a57bf3d5021f7e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505
,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4890c79d-f93f-4487-b301-aedc556dac39 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	9aac7eb346903       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          2 minutes ago       Running             busybox                                  0                   cdbcecc3e9d43       busybox
	8c914a21ca5c3       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          2 minutes ago       Running             csi-snapshotter                          0                   89c5974bdcafd       csi-hostpathplugin-7l7ps
	437ef3bce50ac       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          3 minutes ago       Running             csi-provisioner                          0                   89c5974bdcafd       csi-hostpathplugin-7l7ps
	f73cee1644b03       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd                             4 minutes ago       Running             controller                               0                   147663b03fe63       ingress-nginx-controller-675c5ddd98-9cxnd
	862808e2ff30f       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            4 minutes ago       Running             liveness-probe                           0                   89c5974bdcafd       csi-hostpathplugin-7l7ps
	a4eac7bee2514       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           4 minutes ago       Running             hostpath                                 0                   89c5974bdcafd       csi-hostpathplugin-7l7ps
	89e19f39781eb       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                5 minutes ago       Running             node-driver-registrar                    0                   89c5974bdcafd       csi-hostpathplugin-7l7ps
	68bf99b640c16       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              6 minutes ago       Running             csi-resizer                              0                   c090988aa5e05       csi-hostpath-resizer-0
	39137378c3801       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             6 minutes ago       Running             csi-attacher                             0                   6eaf5e212ad1c       csi-hostpath-attacher-0
	80b7ac026d755       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      6 minutes ago       Running             volume-snapshot-controller               0                   5ef1abbd77f24       snapshot-controller-7d9fbc56b8-jbkmr
	a63011b6ec66f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      6 minutes ago       Running             volume-snapshot-controller               0                   eeeab7772fb0e       snapshot-controller-7d9fbc56b8-2pbx5
	6e0352b147e8a       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   6 minutes ago       Running             csi-external-health-monitor-controller   0                   89c5974bdcafd       csi-hostpathplugin-7l7ps
	7fbb154c5ba00       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39                   6 minutes ago       Exited              patch                                    0                   058d4f2c90db7       ingress-nginx-admission-patch-dmt9r
	5e6c68a57ee53       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39                   6 minutes ago       Exited              create                                   0                   a5dfb28615faf       ingress-nginx-admission-create-6ptqs
	6d2226436f827       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              7 minutes ago       Running             registry-proxy                           0                   c449271f0824b       registry-proxy-bzs78
	dda41d22ea7ff       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            7 minutes ago       Running             gadget                                   0                   e07af8e7a3eca       gadget-z8nnd
	9b56bd6c195bd       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             7 minutes ago       Running             local-path-provisioner                   0                   6d69749ca9bc7       local-path-provisioner-648f6765c9-9ghvj
	7b4c1be283a7f       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               8 minutes ago       Running             minikube-ingress-dns                     0                   9f7ac0dd48cc1       kube-ingress-dns-minikube
	2ad7748982f90       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             8 minutes ago       Running             storage-provisioner                      0                   ca1dd787f338a       storage-provisioner
	9bb5f4d4e768d       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     8 minutes ago       Running             amd-gpu-device-plugin                    0                   fec37181f6706       amd-gpu-device-plugin-vssmp
	9d0ff7b8e8784       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             8 minutes ago       Running             coredns                                  0                   d62d15d11c495       coredns-66bc5c9577-2rqh8
	9d0a2f86b38f4       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             8 minutes ago       Running             kube-proxy                               0                   e1fb2fcb1123b       kube-proxy-fbmdq
	80489befa62b8       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             9 minutes ago       Running             kube-scheduler                           0                   d4cfa30f1a32a       kube-scheduler-addons-994396
	844d913e662bc       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             9 minutes ago       Running             etcd                                     0                   e2f739ab181cd       etcd-addons-994396
	35bb45a49c1f5       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             9 minutes ago       Running             kube-controller-manager                  0                   80615bf9878bb       kube-controller-manager-addons-994396
	fdeec4098b47d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             9 minutes ago       Running             kube-apiserver                           0                   f1c88f09470e5       kube-apiserver-addons-994396
	
	
	==> coredns [9d0ff7b8e8784408623315cf07e8942d13f74e52cb65ad09e2d25796114020c1] <==
	[INFO] 10.244.0.8:48697 - 51150 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000184715s
	[INFO] 10.244.0.8:48250 - 16972 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000493162s
	[INFO] 10.244.0.8:48250 - 61162 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000136279s
	[INFO] 10.244.0.8:48250 - 49360 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000096512s
	[INFO] 10.244.0.8:48250 - 26216 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000299264s
	[INFO] 10.244.0.8:48250 - 35874 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000202457s
	[INFO] 10.244.0.8:48250 - 61811 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000118298s
	[INFO] 10.244.0.8:48250 - 53835 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000198445s
	[INFO] 10.244.0.8:48250 - 29976 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000256226s
	[INFO] 10.244.0.8:45072 - 62111 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000278775s
	[INFO] 10.244.0.8:45072 - 13674 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.00013623s
	[INFO] 10.244.0.8:45072 - 20926 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000480748s
	[INFO] 10.244.0.8:45072 - 36103 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00034739s
	[INFO] 10.244.0.8:45072 - 63009 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000093672s
	[INFO] 10.244.0.8:45072 - 15268 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000082975s
	[INFO] 10.244.0.8:45072 - 12119 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000142179s
	[INFO] 10.244.0.8:45072 - 51564 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000099184s
	[INFO] 10.244.0.8:50589 - 52847 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.00015749s
	[INFO] 10.244.0.8:50589 - 23101 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000334642s
	[INFO] 10.244.0.8:50589 - 51090 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00016402s
	[INFO] 10.244.0.8:50589 - 14793 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000308447s
	[INFO] 10.244.0.8:50589 - 55547 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000242913s
	[INFO] 10.244.0.8:50589 - 33282 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000690858s
	[INFO] 10.244.0.8:50589 - 16331 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000090065s
	[INFO] 10.244.0.8:50589 - 3265 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000065851s
	
	
	==> describe nodes <==
	Name:               addons-994396
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-994396
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=addons-994396
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T08_45_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-994396
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-994396"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 08:45:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-994396
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 08:54:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 08:52:28 +0000   Sat, 01 Nov 2025 08:45:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 08:52:28 +0000   Sat, 01 Nov 2025 08:45:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 08:52:28 +0000   Sat, 01 Nov 2025 08:45:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 08:52:28 +0000   Sat, 01 Nov 2025 08:45:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    addons-994396
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 47158355a9594cbf84ea23a10000597a
	  System UUID:                47158355-a959-4cbf-84ea-23a10000597a
	  Boot ID:                    8b22796c-545f-4b51-954a-eb39441cd160
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (24 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  gadget                      gadget-z8nnd                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m42s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-9cxnd                     100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         8m41s
	  kube-system                 amd-gpu-device-plugin-vssmp                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m47s
	  kube-system                 coredns-66bc5c9577-2rqh8                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m50s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 csi-hostpathplugin-7l7ps                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 etcd-addons-994396                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m57s
	  kube-system                 kube-apiserver-addons-994396                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m57s
	  kube-system                 kube-controller-manager-addons-994396                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m56s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m44s
	  kube-system                 kube-proxy-fbmdq                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m50s
	  kube-system                 kube-scheduler-addons-994396                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m55s
	  kube-system                 registry-6b586f9694-b4ph6                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m44s
	  kube-system                 registry-creds-764b6fb674-xstzf                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m47s
	  kube-system                 registry-proxy-bzs78                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m44s
	  kube-system                 snapshot-controller-7d9fbc56b8-2pbx5                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m40s
	  kube-system                 snapshot-controller-7d9fbc56b8-jbkmr                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m40s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m43s
	  local-path-storage          helper-pod-create-pvc-2db794c4-2444-4d03-b933-772cf722902e    0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  local-path-storage          local-path-provisioner-648f6765c9-9ghvj                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m43s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-j8882                                0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     8m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             388Mi (9%)  426Mi (10%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m48s                kube-proxy       
	  Normal  Starting                 9m2s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m2s (x8 over 9m2s)  kubelet          Node addons-994396 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m2s (x8 over 9m2s)  kubelet          Node addons-994396 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m2s (x7 over 9m2s)  kubelet          Node addons-994396 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m56s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m55s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m55s                kubelet          Node addons-994396 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m55s                kubelet          Node addons-994396 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m55s                kubelet          Node addons-994396 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m55s                kubelet          Node addons-994396 status is now: NodeReady
	  Normal  RegisteredNode           8m51s                node-controller  Node addons-994396 event: Registered Node addons-994396 in Controller
	
	
	==> dmesg <==
	[  +1.563212] kauditd_printk_skb: 324 callbacks suppressed
	[  +1.257687] kauditd_printk_skb: 290 callbacks suppressed
	[  +1.044955] kauditd_printk_skb: 392 callbacks suppressed
	[  +9.269860] kauditd_printk_skb: 20 callbacks suppressed
	[  +9.205621] kauditd_printk_skb: 11 callbacks suppressed
	[Nov 1 08:46] kauditd_printk_skb: 5 callbacks suppressed
	[Nov 1 08:47] kauditd_printk_skb: 32 callbacks suppressed
	[ +34.333332] kauditd_printk_skb: 101 callbacks suppressed
	[  +3.822306] kauditd_printk_skb: 111 callbacks suppressed
	[  +1.002792] kauditd_printk_skb: 88 callbacks suppressed
	[Nov 1 08:49] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.000036] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.000133] kauditd_printk_skb: 29 callbacks suppressed
	[ +11.240953] kauditd_printk_skb: 41 callbacks suppressed
	[Nov 1 08:50] kauditd_printk_skb: 17 callbacks suppressed
	[ +34.452421] kauditd_printk_skb: 2 callbacks suppressed
	[Nov 1 08:51] kauditd_printk_skb: 26 callbacks suppressed
	[  +0.000047] kauditd_printk_skb: 5 callbacks suppressed
	[ +21.931610] kauditd_printk_skb: 26 callbacks suppressed
	[Nov 1 08:52] kauditd_printk_skb: 5 callbacks suppressed
	[  +6.008516] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.922747] kauditd_printk_skb: 38 callbacks suppressed
	[  +6.151130] kauditd_printk_skb: 37 callbacks suppressed
	[ +11.857033] kauditd_printk_skb: 84 callbacks suppressed
	[  +0.000069] kauditd_printk_skb: 22 callbacks suppressed
	
	
	==> etcd [844d913e662bc4587cf597763a1bad42bb8a4bf500ce948d822cfcb86a7e9fde] <==
	{"level":"info","ts":"2025-11-01T08:47:40.022473Z","caller":"traceutil/trace.go:172","msg":"trace[1681082771] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1099; }","duration":"172.511317ms","start":"2025-11-01T08:47:39.849952Z","end":"2025-11-01T08:47:40.022464Z","steps":["trace[1681082771] 'agreement among raft nodes before linearized reading'  (duration: 172.390682ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:47:40.023057Z","caller":"traceutil/trace.go:172","msg":"trace[1789560008] transaction","detail":"{read_only:false; response_revision:1100; number_of_response:1; }","duration":"190.55839ms","start":"2025-11-01T08:47:39.832485Z","end":"2025-11-01T08:47:40.023044Z","steps":["trace[1789560008] 'process raft request'  (duration: 189.801558ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:47:46.238190Z","caller":"traceutil/trace.go:172","msg":"trace[1673998268] transaction","detail":"{read_only:false; response_revision:1152; number_of_response:1; }","duration":"130.656798ms","start":"2025-11-01T08:47:46.107519Z","end":"2025-11-01T08:47:46.238176Z","steps":["trace[1673998268] 'process raft request'  (duration: 130.530561ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:47:54.978149Z","caller":"traceutil/trace.go:172","msg":"trace[879398792] linearizableReadLoop","detail":"{readStateIndex:1248; appliedIndex:1248; }","duration":"128.792993ms","start":"2025-11-01T08:47:54.849340Z","end":"2025-11-01T08:47:54.978133Z","steps":["trace[879398792] 'read index received'  (duration: 128.787273ms)","trace[879398792] 'applied index is now lower than readState.Index'  (duration: 4.859µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T08:47:54.978274Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.918573ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:47:54.978294Z","caller":"traceutil/trace.go:172","msg":"trace[478888116] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1194; }","duration":"128.951874ms","start":"2025-11-01T08:47:54.849337Z","end":"2025-11-01T08:47:54.978289Z","steps":["trace[478888116] 'agreement among raft nodes before linearized reading'  (duration: 128.896473ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:47:54.978301Z","caller":"traceutil/trace.go:172","msg":"trace[127276739] transaction","detail":"{read_only:false; response_revision:1195; number_of_response:1; }","duration":"193.938157ms","start":"2025-11-01T08:47:54.784350Z","end":"2025-11-01T08:47:54.978289Z","steps":["trace[127276739] 'process raft request'  (duration: 193.811655ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:50:03.807211Z","caller":"traceutil/trace.go:172","msg":"trace[306428088] transaction","detail":"{read_only:false; response_revision:1410; number_of_response:1; }","duration":"143.076836ms","start":"2025-11-01T08:50:03.664107Z","end":"2025-11-01T08:50:03.807184Z","steps":["trace[306428088] 'process raft request'  (duration: 142.860459ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:50:30.399983Z","caller":"traceutil/trace.go:172","msg":"trace[417490432] transaction","detail":"{read_only:false; response_revision:1462; number_of_response:1; }","duration":"105.005558ms","start":"2025-11-01T08:50:30.294965Z","end":"2025-11-01T08:50:30.399970Z","steps":["trace[417490432] 'process raft request'  (duration: 104.840267ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:51:25.785305Z","caller":"traceutil/trace.go:172","msg":"trace[446064097] linearizableReadLoop","detail":"{readStateIndex:1675; appliedIndex:1675; }","duration":"202.139299ms","start":"2025-11-01T08:51:25.583130Z","end":"2025-11-01T08:51:25.785270Z","steps":["trace[446064097] 'read index received'  (duration: 202.133895ms)","trace[446064097] 'applied index is now lower than readState.Index'  (duration: 4.594µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T08:51:25.785474Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"202.320618ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:51:25.785498Z","caller":"traceutil/trace.go:172","msg":"trace[2127751376] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1576; }","duration":"202.392505ms","start":"2025-11-01T08:51:25.583101Z","end":"2025-11-01T08:51:25.785493Z","steps":["trace[2127751376] 'agreement among raft nodes before linearized reading'  (duration: 202.298341ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:51:25.785518Z","caller":"traceutil/trace.go:172","msg":"trace[25251410] transaction","detail":"{read_only:false; response_revision:1577; number_of_response:1; }","duration":"230.552599ms","start":"2025-11-01T08:51:25.554955Z","end":"2025-11-01T08:51:25.785507Z","steps":["trace[25251410] 'process raft request'  (duration: 230.448007ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:52:18.027453Z","caller":"traceutil/trace.go:172","msg":"trace[1612683542] linearizableReadLoop","detail":"{readStateIndex:1872; appliedIndex:1872; }","duration":"169.871386ms","start":"2025-11-01T08:52:17.857553Z","end":"2025-11-01T08:52:18.027424Z","steps":["trace[1612683542] 'read index received'  (duration: 169.865757ms)","trace[1612683542] 'applied index is now lower than readState.Index'  (duration: 4.911µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T08:52:18.027601Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"170.004057ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:52:18.027618Z","caller":"traceutil/trace.go:172","msg":"trace[354966435] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1760; }","duration":"170.064613ms","start":"2025-11-01T08:52:17.857549Z","end":"2025-11-01T08:52:18.027613Z","steps":["trace[354966435] 'agreement among raft nodes before linearized reading'  (duration: 169.976661ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:52:18.027617Z","caller":"traceutil/trace.go:172","msg":"trace[182557049] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1761; }","duration":"175.595316ms","start":"2025-11-01T08:52:17.852012Z","end":"2025-11-01T08:52:18.027607Z","steps":["trace[182557049] 'process raft request'  (duration: 175.503416ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:52:23.484737Z","caller":"traceutil/trace.go:172","msg":"trace[1326759402] linearizableReadLoop","detail":"{readStateIndex:1904; appliedIndex:1904; }","duration":"340.503004ms","start":"2025-11-01T08:52:23.144214Z","end":"2025-11-01T08:52:23.484717Z","steps":["trace[1326759402] 'read index received'  (duration: 340.496208ms)","trace[1326759402] 'applied index is now lower than readState.Index'  (duration: 5.868µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T08:52:23.485008Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"340.771395ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1114"}
	{"level":"info","ts":"2025-11-01T08:52:23.485058Z","caller":"traceutil/trace.go:172","msg":"trace[1039449345] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1790; }","duration":"340.841883ms","start":"2025-11-01T08:52:23.144209Z","end":"2025-11-01T08:52:23.485051Z","steps":["trace[1039449345] 'agreement among raft nodes before linearized reading'  (duration: 340.62868ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T08:52:23.485106Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T08:52:23.144193Z","time spent":"340.902265ms","remote":"127.0.0.1:36552","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":1,"response size":1137,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 "}
	{"level":"warn","ts":"2025-11-01T08:52:23.485553Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"287.574901ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:52:23.485588Z","caller":"traceutil/trace.go:172","msg":"trace[1585287071] range","detail":"{range_begin:/registry/namespaces; range_end:; response_count:0; response_revision:1791; }","duration":"287.617514ms","start":"2025-11-01T08:52:23.197963Z","end":"2025-11-01T08:52:23.485581Z","steps":["trace[1585287071] 'agreement among raft nodes before linearized reading'  (duration: 287.549253ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:52:23.485660Z","caller":"traceutil/trace.go:172","msg":"trace[1103263823] transaction","detail":"{read_only:false; response_revision:1791; number_of_response:1; }","duration":"361.459988ms","start":"2025-11-01T08:52:23.124191Z","end":"2025-11-01T08:52:23.485651Z","steps":["trace[1103263823] 'process raft request'  (duration: 361.180443ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T08:52:23.485795Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T08:52:23.124175Z","time spent":"361.507625ms","remote":"127.0.0.1:36760","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1766 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	
	
	==> kernel <==
	 08:54:14 up 9 min,  0 users,  load average: 0.33, 0.59, 0.42
	Linux addons-994396 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [fdeec4098b47d6e27b77f71ac1761aeb26a09c97d53566cde6a7c5ae79150c25] <==
	W1101 08:45:52.368142       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 08:45:52.421009       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1101 08:45:52.435322       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 08:46:31.751759       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 08:46:31.751828       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1101 08:46:31.751848       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1101 08:46:31.752853       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 08:46:31.752966       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1101 08:46:31.753020       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1101 08:48:03.292013       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.19.139:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.19.139:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.19.139:443: connect: connection refused" logger="UnhandledError"
	W1101 08:48:03.296407       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 08:48:03.296747       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1101 08:48:03.297742       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.19.139:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.19.139:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.19.139:443: connect: connection refused" logger="UnhandledError"
	E1101 08:48:03.298496       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.19.139:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.19.139:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.19.139:443: connect: connection refused" logger="UnhandledError"
	I1101 08:48:03.353240       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1101 08:52:03.525330       1 conn.go:339] Error on socket receive: read tcp 192.168.39.195:8443->192.168.39.1:42910: use of closed network connection
	E1101 08:52:03.723785       1 conn.go:339] Error on socket receive: read tcp 192.168.39.195:8443->192.168.39.1:42940: use of closed network connection
	I1101 08:52:12.984624       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.226.149"}
	I1101 08:53:04.341444       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [35bb45a49c1f528c9112deb8bfa037389ae6fae43afcbb2f86e4c3ed61156bf8] <==
	I1101 08:45:22.366258       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 08:45:22.366364       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 08:45:22.366391       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 08:45:22.366402       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 08:45:22.367748       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 08:45:22.368440       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 08:45:22.374762       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	E1101 08:45:30.978623       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1101 08:45:52.325414       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 08:45:52.326150       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1101 08:45:52.326217       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1101 08:45:52.371391       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1101 08:45:52.385120       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1101 08:45:52.427186       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 08:45:52.485474       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1101 08:46:22.433268       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 08:46:22.496038       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1101 08:46:52.438789       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 08:46:52.504482       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1101 08:47:22.446493       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 08:47:22.515370       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1101 08:47:52.452536       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 08:47:52.535721       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1101 08:52:17.008825       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I1101 08:52:35.860282       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	
	
	==> kube-proxy [9d0a2f86b38f42fab057b3fea7994c15073ec1d05f3db97341f0fed0ad342cf9] <==
	I1101 08:45:24.962819       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 08:45:25.066839       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 08:45:25.068064       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.195"]
	E1101 08:45:25.073313       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 08:45:25.410848       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 08:45:25.410962       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 08:45:25.410991       1 server_linux.go:132] "Using iptables Proxier"
	I1101 08:45:25.477946       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 08:45:25.478244       1 server.go:527] "Version info" version="v1.34.1"
	I1101 08:45:25.478277       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 08:45:25.484125       1 config.go:106] "Starting endpoint slice config controller"
	I1101 08:45:25.484405       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 08:45:25.491275       1 config.go:200] "Starting service config controller"
	I1101 08:45:25.491309       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 08:45:25.494813       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 08:45:25.496161       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 08:45:25.495379       1 config.go:309] "Starting node config controller"
	I1101 08:45:25.506423       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 08:45:25.506433       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 08:45:25.584706       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 08:45:25.592170       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 08:45:25.598016       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [80489befa62b8185c103a7d016a78a5924e4c5187536cb66142d1c5f8cc4a5b5] <==
	E1101 08:45:15.349464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 08:45:15.349542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 08:45:15.349728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 08:45:15.349881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 08:45:15.352076       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 08:45:15.352119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 08:45:15.352139       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 08:45:15.352358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 08:45:15.352409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 08:45:15.357367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 08:45:15.357513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 08:45:15.357652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 08:45:16.203110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 08:45:16.263373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 08:45:16.299073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 08:45:16.424658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 08:45:16.486112       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 08:45:16.556670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 08:45:16.568573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 08:45:16.598275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 08:45:16.651957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 08:45:16.662617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 08:45:16.674245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 08:45:16.759792       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1101 08:45:19.143863       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 08:53:08 addons-994396 kubelet[1497]: E1101 08:53:08.300655    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761987188300153247  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:53:11 addons-994396 kubelet[1497]: E1101 08:53:11.972612    1497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"yakd\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/marcnuri/yakd:0.0.5@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624\\\": ErrImagePull: reading manifest sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624 in docker.io/marcnuri/yakd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="yakd-dashboard/yakd-dashboard-5ff678cb9-j8882" podUID="0077b05b-14cd-445f-9783-8883fbae27e5"
	Nov 01 08:53:18 addons-994396 kubelet[1497]: E1101 08:53:18.304728    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761987198304317776  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:53:18 addons-994396 kubelet[1497]: E1101 08:53:18.304771    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761987198304317776  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:53:18 addons-994396 kubelet[1497]: I1101 08:53:18.969759    1497 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 08:53:19 addons-994396 kubelet[1497]: I1101 08:53:19.024363    1497 scope.go:117] "RemoveContainer" containerID="3a3b5a07c657e4ab6d035f6f6ccdd890c89817b915b34c233842eaafd57effd2"
	Nov 01 08:53:20 addons-994396 kubelet[1497]: I1101 08:53:20.969601    1497 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-vssmp" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 08:53:26 addons-994396 kubelet[1497]: E1101 08:53:26.046634    1497 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 01 08:53:26 addons-994396 kubelet[1497]: E1101 08:53:26.046758    1497 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 01 08:53:26 addons-994396 kubelet[1497]: E1101 08:53:26.047158    1497 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(8623da74-791e-4fd6-a974-60ebca5738a7): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 01 08:53:26 addons-994396 kubelet[1497]: E1101 08:53:26.047212    1497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="8623da74-791e-4fd6-a974-60ebca5738a7"
	Nov 01 08:53:26 addons-994396 kubelet[1497]: E1101 08:53:26.558506    1497 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="8623da74-791e-4fd6-a974-60ebca5738a7"
	Nov 01 08:53:27 addons-994396 kubelet[1497]: I1101 08:53:27.971611    1497 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-bzs78" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 08:53:28 addons-994396 kubelet[1497]: E1101 08:53:28.307367    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761987208306866933  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:53:28 addons-994396 kubelet[1497]: E1101 08:53:28.307392    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761987208306866933  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:53:38 addons-994396 kubelet[1497]: E1101 08:53:38.310071    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761987218309497540  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:53:38 addons-994396 kubelet[1497]: E1101 08:53:38.310112    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761987218309497540  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:53:41 addons-994396 kubelet[1497]: E1101 08:53:41.306579    1497 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 01 08:53:41 addons-994396 kubelet[1497]: E1101 08:53:41.306697    1497 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/75cdadc5-e3ea-4aae-9002-6dca21e0f758-gcr-creds podName:75cdadc5-e3ea-4aae-9002-6dca21e0f758 nodeName:}" failed. No retries permitted until 2025-11-01 08:55:43.306673063 +0000 UTC m=+625.471102709 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/75cdadc5-e3ea-4aae-9002-6dca21e0f758-gcr-creds") pod "registry-creds-764b6fb674-xstzf" (UID: "75cdadc5-e3ea-4aae-9002-6dca21e0f758") : secret "registry-creds-gcr" not found
	Nov 01 08:53:48 addons-994396 kubelet[1497]: E1101 08:53:48.313103    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761987228312596639  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:53:48 addons-994396 kubelet[1497]: E1101 08:53:48.313150    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761987228312596639  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:53:58 addons-994396 kubelet[1497]: E1101 08:53:58.315787    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761987238315220006  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:53:58 addons-994396 kubelet[1497]: E1101 08:53:58.315873    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761987238315220006  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:54:08 addons-994396 kubelet[1497]: E1101 08:54:08.318731    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761987248318199941  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	Nov 01 08:54:08 addons-994396 kubelet[1497]: E1101 08:54:08.318765    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761987248318199941  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:454585}  inodes_used:{value:166}}"
	
	
	==> storage-provisioner [2ad7748982f904bf89ac86d1b7be83acfe37cfe9d240db5a3d2236808b8910a3] <==
	W1101 08:53:49.987358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:53:51.991617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:53:51.997430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:53:54.001630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:53:54.007832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:53:56.012783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:53:56.021099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:53:58.025472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:53:58.032685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:54:00.036519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:54:00.044360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:54:02.047940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:54:02.055782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:54:04.059397       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:54:04.064805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:54:06.068187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:54:06.074136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:54:08.077608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:54:08.086334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:54:10.090175       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:54:10.098688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:54:12.104415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:54:12.111812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:54:14.118569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:54:14.127286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-994396 -n addons-994396
helpers_test.go:269: (dbg) Run:  kubectl --context addons-994396 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: task-pv-pod test-local-path ingress-nginx-admission-create-6ptqs ingress-nginx-admission-patch-dmt9r registry-6b586f9694-b4ph6 registry-creds-764b6fb674-xstzf helper-pod-create-pvc-2db794c4-2444-4d03-b933-772cf722902e yakd-dashboard-5ff678cb9-j8882
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Yakd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-994396 describe pod task-pv-pod test-local-path ingress-nginx-admission-create-6ptqs ingress-nginx-admission-patch-dmt9r registry-6b586f9694-b4ph6 registry-creds-764b6fb674-xstzf helper-pod-create-pvc-2db794c4-2444-4d03-b933-772cf722902e yakd-dashboard-5ff678cb9-j8882
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-994396 describe pod task-pv-pod test-local-path ingress-nginx-admission-create-6ptqs ingress-nginx-admission-patch-dmt9r registry-6b586f9694-b4ph6 registry-creds-764b6fb674-xstzf helper-pod-create-pvc-2db794c4-2444-4d03-b933-772cf722902e yakd-dashboard-5ff678cb9-j8882: exit status 1 (93.656351ms)

                                                
                                                
-- stdout --
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-994396/192.168.39.195
	Start Time:       Sat, 01 Nov 2025 08:52:44 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mngk2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-mngk2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  91s                default-scheduler  Successfully assigned default/task-pv-pod to addons-994396
	  Warning  Failed     49s                kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     49s                kubelet            Error: ErrImagePull
	  Normal   BackOff    49s                kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     49s                kubelet            Error: ImagePullBackOff
	  Normal   Pulling    39s (x2 over 91s)  kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-65r97 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-65r97:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-6ptqs" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-dmt9r" not found
	Error from server (NotFound): pods "registry-6b586f9694-b4ph6" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-xstzf" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-2db794c4-2444-4d03-b933-772cf722902e" not found
	Error from server (NotFound): pods "yakd-dashboard-5ff678cb9-j8882" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-994396 describe pod task-pv-pod test-local-path ingress-nginx-admission-create-6ptqs ingress-nginx-admission-patch-dmt9r registry-6b586f9694-b4ph6 registry-creds-764b6fb674-xstzf helper-pod-create-pvc-2db794c4-2444-4d03-b933-772cf722902e yakd-dashboard-5ff678cb9-j8882: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-994396 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-994396 addons disable yakd --alsologtostderr -v=1: (1m53.458509597s)
--- FAIL: TestAddons/parallel/Yakd (236.53s)
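Reading the events above, the common proximate cause is Docker Hub's unauthenticated pull rate limit ("toomanyrequests"): the yakd, registry, and nginx images all failed to pull, so the affected pods never left ImagePullBackOff. A minimal mitigation sketch, assuming a workstation Docker daemon that is already logged in to docker.io (the profile name addons-994396 comes from this run; the image tag is only illustrative):

	docker login docker.io
	docker pull docker.io/marcnuri/yakd:0.0.5
	minikube -p addons-994396 image load docker.io/marcnuri/yakd:0.0.5

Side-loading the image this way lets kubelet start the container without the node pulling from docker.io; arranging authenticated pulls in-cluster (for example via the registry-creds addon, which this run had pending with a missing "registry-creds-gcr" secret) is another option.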

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (302.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-854568 --alsologtostderr -v=1]
E1101 09:12:03.112718  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:16:35.403958  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-854568 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-854568 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-854568 --alsologtostderr -v=1] stderr:
I1101 09:11:44.515610  546647 out.go:360] Setting OutFile to fd 1 ...
I1101 09:11:44.515863  546647 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:11:44.515873  546647 out.go:374] Setting ErrFile to fd 2...
I1101 09:11:44.515877  546647 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:11:44.516067  546647 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-530629/.minikube/bin
I1101 09:11:44.516454  546647 mustload.go:66] Loading cluster: functional-854568
I1101 09:11:44.516823  546647 config.go:182] Loaded profile config "functional-854568": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:11:44.518657  546647 host.go:66] Checking if "functional-854568" exists ...
I1101 09:11:44.518855  546647 api_server.go:166] Checking apiserver status ...
I1101 09:11:44.518907  546647 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1101 09:11:44.520984  546647 main.go:143] libmachine: domain functional-854568 has defined MAC address 52:54:00:cb:ec:ba in network mk-functional-854568
I1101 09:11:44.521374  546647 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cb:ec:ba", ip: ""} in network mk-functional-854568: {Iface:virbr1 ExpiryTime:2025-11-01 10:08:15 +0000 UTC Type:0 Mac:52:54:00:cb:ec:ba Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:functional-854568 Clientid:01:52:54:00:cb:ec:ba}
I1101 09:11:44.521403  546647 main.go:143] libmachine: domain functional-854568 has defined IP address 192.168.39.129 and MAC address 52:54:00:cb:ec:ba in network mk-functional-854568
I1101 09:11:44.521528  546647 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/functional-854568/id_rsa Username:docker}
I1101 09:11:44.617667  546647 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6855/cgroup
W1101 09:11:44.630557  546647 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/6855/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1101 09:11:44.630629  546647 ssh_runner.go:195] Run: ls
I1101 09:11:44.635842  546647 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8441/healthz ...
I1101 09:11:44.640905  546647 api_server.go:279] https://192.168.39.129:8441/healthz returned 200:
ok
W1101 09:11:44.640946  546647 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1101 09:11:44.641108  546647 config.go:182] Loaded profile config "functional-854568": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:11:44.641120  546647 addons.go:70] Setting dashboard=true in profile "functional-854568"
I1101 09:11:44.641133  546647 addons.go:239] Setting addon dashboard=true in "functional-854568"
I1101 09:11:44.641159  546647 host.go:66] Checking if "functional-854568" exists ...
I1101 09:11:44.644695  546647 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1101 09:11:44.645985  546647 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1101 09:11:44.647214  546647 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1101 09:11:44.647229  546647 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1101 09:11:44.649726  546647 main.go:143] libmachine: domain functional-854568 has defined MAC address 52:54:00:cb:ec:ba in network mk-functional-854568
I1101 09:11:44.650064  546647 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cb:ec:ba", ip: ""} in network mk-functional-854568: {Iface:virbr1 ExpiryTime:2025-11-01 10:08:15 +0000 UTC Type:0 Mac:52:54:00:cb:ec:ba Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:functional-854568 Clientid:01:52:54:00:cb:ec:ba}
I1101 09:11:44.650088  546647 main.go:143] libmachine: domain functional-854568 has defined IP address 192.168.39.129 and MAC address 52:54:00:cb:ec:ba in network mk-functional-854568
I1101 09:11:44.650217  546647 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/functional-854568/id_rsa Username:docker}
I1101 09:11:44.749652  546647 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1101 09:11:44.749681  546647 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1101 09:11:44.771832  546647 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1101 09:11:44.771867  546647 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1101 09:11:44.795736  546647 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1101 09:11:44.795765  546647 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1101 09:11:44.817720  546647 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1101 09:11:44.817760  546647 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1101 09:11:44.839337  546647 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1101 09:11:44.839387  546647 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1101 09:11:44.860943  546647 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1101 09:11:44.860981  546647 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1101 09:11:44.889811  546647 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1101 09:11:44.889837  546647 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1101 09:11:44.914043  546647 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1101 09:11:44.914075  546647 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1101 09:11:44.937283  546647 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1101 09:11:44.937321  546647 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1101 09:11:44.959648  546647 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1101 09:11:45.738987  546647 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-854568 addons enable metrics-server

I1101 09:11:45.740654  546647 addons.go:202] Writing out "functional-854568" config to set dashboard=true...
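(Editor's note: the install phase above copies ten dashboard manifests into /etc/kubernetes/addons/ on the node and applies them in one kubectl invocation. A rough host-side sketch of the same apply step is shown below, assuming the manifests already exist at those paths and that a plain kubectl on PATH points at the cluster; this is illustrative only, not minikube's ssh_runner-based implementation.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Paths mirror the files copied in the log above.
		manifests := []string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-clusterrole.yaml",
			"/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml",
			"/etc/kubernetes/addons/dashboard-configmap.yaml",
			"/etc/kubernetes/addons/dashboard-dp.yaml",
			"/etc/kubernetes/addons/dashboard-role.yaml",
			"/etc/kubernetes/addons/dashboard-rolebinding.yaml",
			"/etc/kubernetes/addons/dashboard-sa.yaml",
			"/etc/kubernetes/addons/dashboard-secret.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml",
		}
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		// Equivalent to the single "kubectl apply -f ... -f ..." run in the log.
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("apply failed:", err)
		}
	}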
W1101 09:11:45.741044  546647 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1101 09:11:45.742168  546647 kapi.go:59] client config for functional-854568: &rest.Config{Host:"https://192.168.39.129:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21833-530629/.minikube/profiles/functional-854568/client.crt", KeyFile:"/home/jenkins/minikube-integration/21833-530629/.minikube/profiles/functional-854568/client.key", CAFile:"/home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1101 09:11:45.742885  546647 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1101 09:11:45.742929  546647 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1101 09:11:45.742939  546647 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1101 09:11:45.742950  546647 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1101 09:11:45.742957  546647 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1101 09:11:45.757572  546647 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  9dc6af73-821a-4a27-af79-d11cee0530f2 822 0 2025-11-01 09:11:45 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-11-01 09:11:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.107.47.106,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.107.47.106],IPFamilies:[IPv4],AllocateLoadBalance
rNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1101 09:11:45.757783  546647 out.go:285] * Launching proxy ...
* Launching proxy ...
I1101 09:11:45.757913  546647 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-854568 proxy --port 36195]
I1101 09:11:45.758460  546647 dashboard.go:159] Waiting for kubectl to output host:port ...
I1101 09:11:45.808646  546647 dashboard.go:177] proxy stdout: Starting to serve on 127.0.0.1:36195
W1101 09:11:45.808686  546647 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1101 09:11:45.818011  546647 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b0edf7ab-9ef6-4ea3-a7ce-f5f5bdfedc44] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:11:45 GMT]] Body:0xc000bb94c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206640 TLS:<nil>}
I1101 09:11:45.818104  546647 retry.go:31] will retry after 106.977µs: Temporary Error: unexpected response code: 503
I1101 09:11:45.822455  546647 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e5f6f999-a498-4c8c-b6ec-3f4993b67864] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:11:45 GMT]] Body:0xc0009346c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005aea00 TLS:<nil>}
I1101 09:11:45.822527  546647 retry.go:31] will retry after 148.921µs: Temporary Error: unexpected response code: 503
I1101 09:11:45.826546  546647 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d3800e7d-520a-4446-b568-23fca13dcfbb] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:11:45 GMT]] Body:0xc0009a14c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00061dcc0 TLS:<nil>}
I1101 09:11:45.826590  546647 retry.go:31] will retry after 184.333µs: Temporary Error: unexpected response code: 503
I1101 09:11:45.830781  546647 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[453164ad-f675-44ed-89e1-53e1fd0b8a3f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:11:45 GMT]] Body:0xc000934800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206780 TLS:<nil>}
I1101 09:11:45.830847  546647 retry.go:31] will retry after 421.572µs: Temporary Error: unexpected response code: 503
I1101 09:11:45.834589  546647 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[54492918-9c03-459e-b9a0-c8096b60be38] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:11:45 GMT]] Body:0xc0009a1e40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00061de00 TLS:<nil>}
I1101 09:11:45.834657  546647 retry.go:31] will retry after 588.782µs: Temporary Error: unexpected response code: 503
I1101 09:11:45.838376  546647 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7b3bc56c-eac7-4289-b219-32e776d030da] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:11:45 GMT]] Body:0xc000934900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002068c0 TLS:<nil>}
I1101 09:11:45.838430  546647 retry.go:31] will retry after 1.065399ms: Temporary Error: unexpected response code: 503
I1101 09:11:45.842533  546647 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[55cce3d3-7b16-4ccd-ab98-16a8e2ac6c6e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:11:45 GMT]] Body:0xc0009349c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b2140 TLS:<nil>}
I1101 09:11:45.842589  546647 retry.go:31] will retry after 1.560192ms: Temporary Error: unexpected response code: 503
I1101 09:11:45.847681  546647 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[653501d2-f28d-41cc-a809-b0e3da835413] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:11:45 GMT]] Body:0xc0016fa000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b2280 TLS:<nil>}
I1101 09:11:45.847751  546647 retry.go:31] will retry after 1.812861ms: Temporary Error: unexpected response code: 503
I1101 09:11:45.852874  546647 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2e6a6ac1-17ed-41fe-a89a-6d4d41e4d495] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:11:45 GMT]] Body:0xc000934ac0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206b40 TLS:<nil>}
I1101 09:11:45.852942  546647 retry.go:31] will retry after 2.610943ms: Temporary Error: unexpected response code: 503
I1101 09:11:45.858705  546647 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e21226e5-ef69-4dbb-a8ed-9aba5335bcf6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:11:45 GMT]] Body:0xc0016fa100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b23c0 TLS:<nil>}
I1101 09:11:45.858770  546647 retry.go:31] will retry after 2.372708ms: Temporary Error: unexpected response code: 503
I1101 09:11:45.864447  546647 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d03d42c5-ea2f-477e-839e-2c0832c3c646] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:11:45 GMT]] Body:0xc000bb9600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206c80 TLS:<nil>}
I1101 09:11:45.864503  546647 retry.go:31] will retry after 4.881431ms: Temporary Error: unexpected response code: 503
I1101 09:11:45.874613  546647 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[88ab8ba6-2b11-445c-b95a-8b939145bff1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:11:45 GMT]] Body:0xc0016fa1c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005aedc0 TLS:<nil>}
I1101 09:11:45.874727  546647 retry.go:31] will retry after 6.167065ms: Temporary Error: unexpected response code: 503
I1101 09:11:45.884701  546647 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[54a76244-e8b7-4db0-8a38-cef3bdb8a029] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:11:45 GMT]] Body:0xc000934ec0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206dc0 TLS:<nil>}
I1101 09:11:45.884767  546647 retry.go:31] will retry after 19.156929ms: Temporary Error: unexpected response code: 503
I1101 09:11:45.913763  546647 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[822a5db1-833d-4799-9022-bd61d5248c3a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:11:45 GMT]] Body:0xc0016fa2c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b2500 TLS:<nil>}
I1101 09:11:45.913852  546647 retry.go:31] will retry after 24.73389ms: Temporary Error: unexpected response code: 503
I1101 09:11:45.948784  546647 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[00652bd1-9e06-4110-af33-9c6eb3b7d77e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:11:45 GMT]] Body:0xc001670040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206f00 TLS:<nil>}
I1101 09:11:45.948883  546647 retry.go:31] will retry after 18.721292ms: Temporary Error: unexpected response code: 503
I1101 09:11:45.972319  546647 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8749b638-a5d7-479e-aa97-7cdce35c77e3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:11:45 GMT]] Body:0xc000bb9780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b2640 TLS:<nil>}
I1101 09:11:45.972416  546647 retry.go:31] will retry after 48.888181ms: Temporary Error: unexpected response code: 503
I1101 09:11:46.026549  546647 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5e0e666a-aec4-4bc5-a278-dcbc30db9191] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:11:46 GMT]] Body:0xc001670140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005aef00 TLS:<nil>}
I1101 09:11:46.026647  546647 retry.go:31] will retry after 34.211999ms: Temporary Error: unexpected response code: 503
I1101 09:11:46.068917  546647 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[194ba6f8-bac3-46d7-a452-d875fb6d5a35] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:11:46 GMT]] Body:0xc000bb9880 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b2780 TLS:<nil>}
I1101 09:11:46.069019  546647 retry.go:31] will retry after 86.588348ms: Temporary Error: unexpected response code: 503
I1101 09:11:46.161222  546647 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9f99df54-8caa-4687-86dd-79fa8e20d857] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:11:46 GMT]] Body:0xc000bb9940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005af180 TLS:<nil>}
I1101 09:11:46.161309  546647 retry.go:31] will retry after 169.85185ms: Temporary Error: unexpected response code: 503
I1101 09:11:46.334992  546647 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4b4ed0b6-f62b-4bce-a7cd-7dccc62d0251] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:11:46 GMT]] Body:0xc0016fa400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005af2c0 TLS:<nil>}
I1101 09:11:46.335081  546647 retry.go:31] will retry after 128.245567ms: Temporary Error: unexpected response code: 503
I1101 09:11:46.467821  546647 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[076f993b-78db-4174-b54a-b99dc8d826da] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:11:46 GMT]] Body:0xc001670240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207040 TLS:<nil>}
I1101 09:11:46.467891  546647 retry.go:31] will retry after 308.511527ms: Temporary Error: unexpected response code: 503
I1101 09:11:46.781316  546647 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[abdfb8f4-2e19-4bed-b82a-0a3533cba119] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:11:46 GMT]] Body:0xc000bb9a40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b28c0 TLS:<nil>}
I1101 09:11:46.781392  546647 retry.go:31] will retry after 632.660522ms: Temporary Error: unexpected response code: 503
I1101 09:11:47.418289  546647 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f4ae10a9-1208-4ec7-8b1c-8428efcaa606] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:11:47 GMT]] Body:0xc000bb9b00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005af400 TLS:<nil>}
I1101 09:11:47.418396  546647 retry.go:31] will retry after 757.277946ms: Temporary Error: unexpected response code: 503
I1101 09:11:48.180885  546647 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6ade6e49-08b5-4aec-b780-49ccfef35dbb] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:11:48 GMT]] Body:0xc0016fa540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005af540 TLS:<nil>}
I1101 09:11:48.180973  546647 retry.go:31] will retry after 1.485121622s: Temporary Error: unexpected response code: 503
I1101 09:11:49.671697  546647 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2542d82c-341f-4586-9480-0a54f2e70aae] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:11:49 GMT]] Body:0xc000bb9c00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207180 TLS:<nil>}
I1101 09:11:49.671770  546647 retry.go:31] will retry after 1.408319079s: Temporary Error: unexpected response code: 503
I1101 09:11:51.084511  546647 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[920628f8-7251-4c5c-8ecc-457a0a17e0f0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:11:51 GMT]] Body:0xc0016fa640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005af680 TLS:<nil>}
I1101 09:11:51.084577  546647 retry.go:31] will retry after 3.683629212s: Temporary Error: unexpected response code: 503
I1101 09:11:54.772726  546647 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[15ed2be2-608b-4f37-be03-eafee3ce68cd] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:11:54 GMT]] Body:0xc001670380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002072c0 TLS:<nil>}
I1101 09:11:54.772818  546647 retry.go:31] will retry after 5.322298558s: Temporary Error: unexpected response code: 503
I1101 09:12:00.098941  546647 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[349dee8b-0a88-4d93-8274-dbe9b973a624] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:12:00 GMT]] Body:0xc001670400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207400 TLS:<nil>}
I1101 09:12:00.099030  546647 retry.go:31] will retry after 8.239542589s: Temporary Error: unexpected response code: 503
I1101 09:12:08.343955  546647 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b56e8567-0370-4524-9b43-f7974e632f20] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:12:08 GMT]] Body:0xc000bb9cc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207540 TLS:<nil>}
I1101 09:12:08.344032  546647 retry.go:31] will retry after 5.322970312s: Temporary Error: unexpected response code: 503
I1101 09:12:13.672304  546647 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4be2774c-a878-4ccf-a9b6-7e9a502165b6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:12:13 GMT]] Body:0xc001670480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005af7c0 TLS:<nil>}
I1101 09:12:13.672386  546647 retry.go:31] will retry after 16.335088698s: Temporary Error: unexpected response code: 503
I1101 09:12:30.011653  546647 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3ee60530-49e5-4969-a1ca-e545a30b8d05] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:12:30 GMT]] Body:0xc001670540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b2a00 TLS:<nil>}
I1101 09:12:30.011725  546647 retry.go:31] will retry after 17.593967147s: Temporary Error: unexpected response code: 503
I1101 09:12:47.610128  546647 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a6e2249f-5a77-4f60-aa4f-6064290ce9b3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:12:47 GMT]] Body:0xc0016705c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207680 TLS:<nil>}
I1101 09:12:47.610199  546647 retry.go:31] will retry after 29.954641799s: Temporary Error: unexpected response code: 503
I1101 09:13:17.569749  546647 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a4c00c9f-a1df-4764-a591-40240366ebb9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:13:17 GMT]] Body:0xc000bb9dc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002077c0 TLS:<nil>}
I1101 09:13:17.569831  546647 retry.go:31] will retry after 22.180069135s: Temporary Error: unexpected response code: 503
I1101 09:13:39.753369  546647 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[887ebda2-d99b-4071-afb7-718f2a696801] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:13:39 GMT]] Body:0xc000bb9e80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005af900 TLS:<nil>}
I1101 09:13:39.753457  546647 retry.go:31] will retry after 1m4.084768747s: Temporary Error: unexpected response code: 503
I1101 09:14:43.846487  546647 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9f8c8f5a-ad89-4ff7-a35c-57f56b600106] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:14:43 GMT]] Body:0xc001670040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005ae140 TLS:<nil>}
I1101 09:14:43.846579  546647 retry.go:31] will retry after 1m22.727416692s: Temporary Error: unexpected response code: 503
I1101 09:16:06.578435  546647 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1714cbf7-4039-498a-ae1a-aff6632ccabf] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 09:16:06 GMT]] Body:0xc000bb8140 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005ae280 TLS:<nil>}
I1101 09:16:06.578532  546647 retry.go:31] will retry after 1m1.477173277s: Temporary Error: unexpected response code: 503
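(Editor's note: every probe above hits the dashboard Service through the apiserver's service proxy at http://127.0.0.1:36195/..., and the persistent 503s indicate the Service never had a ready endpoint, so minikube's dashboard.go keeps retrying with growing delays until the test deadline. A standalone sketch of that retry-with-backoff pattern follows; the backoff schedule and timeout are illustrative assumptions, not minikube's exact retry.go parameters.)

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// probe polls the kubectl-proxy URL until it answers 200 or the deadline passes.
	func probe(url string, deadline time.Duration) error {
		client := &http.Client{Timeout: 10 * time.Second}
		delay := 100 * time.Millisecond
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // dashboard answered through the service proxy
				}
				// A 503 here means the Service still has no ready endpoints.
			}
			time.Sleep(delay)
			if delay < 10*time.Second {
				delay *= 2 // exponential backoff, capped
			}
		}
		return fmt.Errorf("dashboard did not become reachable within %s", deadline)
	}

	func main() {
		url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
		if err := probe(url, 5*time.Minute); err != nil {
			fmt.Println(err)
		}
	}

In this run the probe never succeeded, and the test proceeds to collect the post-mortem logs below.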
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-854568 -n functional-854568
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-854568 logs -n 25: (1.51337565s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount     │ -p functional-854568 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3330862646/001:/mount3 --alsologtostderr -v=1                                           │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │                     │
	│ ssh       │ functional-854568 ssh findmnt -T /mount1                                                                                                                     │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ ssh       │ functional-854568 ssh findmnt -T /mount2                                                                                                                     │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ ssh       │ functional-854568 ssh findmnt -T /mount3                                                                                                                     │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ mount     │ -p functional-854568 --kill=true                                                                                                                             │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │                     │
	│ ssh       │ functional-854568 ssh echo hello                                                                                                                             │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ ssh       │ functional-854568 ssh cat /etc/hostname                                                                                                                      │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ start     │ -p functional-854568 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio                                                      │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │                     │
	│ start     │ -p functional-854568 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio                                                      │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │                     │
	│ start     │ -p functional-854568 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │                     │
	│ ssh       │ functional-854568 ssh sudo systemctl is-active docker                                                                                                        │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │                     │
	│ ssh       │ functional-854568 ssh sudo systemctl is-active containerd                                                                                                    │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │                     │
	│ image     │ functional-854568 image load --daemon kicbase/echo-server:functional-854568 --alsologtostderr                                                                │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ image     │ functional-854568 image ls                                                                                                                                   │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ image     │ functional-854568 image load --daemon kicbase/echo-server:functional-854568 --alsologtostderr                                                                │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ image     │ functional-854568 image ls                                                                                                                                   │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ image     │ functional-854568 image load --daemon kicbase/echo-server:functional-854568 --alsologtostderr                                                                │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ image     │ functional-854568 image ls                                                                                                                                   │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ image     │ functional-854568 image save kicbase/echo-server:functional-854568 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ image     │ functional-854568 image rm kicbase/echo-server:functional-854568 --alsologtostderr                                                                           │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ image     │ functional-854568 image ls                                                                                                                                   │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ image     │ functional-854568 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ image     │ functional-854568 image ls                                                                                                                                   │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ image     │ functional-854568 image save --daemon kicbase/echo-server:functional-854568 --alsologtostderr                                                                │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ dashboard │ --url --port 36195 -p functional-854568 --alsologtostderr -v=1                                                                                               │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │                     │
	└───────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:11:38
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:11:38.234335  546374 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:11:38.234641  546374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:11:38.234652  546374 out.go:374] Setting ErrFile to fd 2...
	I1101 09:11:38.234660  546374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:11:38.234890  546374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-530629/.minikube/bin
	I1101 09:11:38.235395  546374 out.go:368] Setting JSON to false
	I1101 09:11:38.236295  546374 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":64420,"bootTime":1761923878,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:11:38.236391  546374 start.go:143] virtualization: kvm guest
	I1101 09:11:38.238286  546374 out.go:179] * [functional-854568] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:11:38.239579  546374 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 09:11:38.239605  546374 notify.go:221] Checking for updates...
	I1101 09:11:38.241694  546374 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:11:38.243040  546374 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-530629/kubeconfig
	I1101 09:11:38.244334  546374 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-530629/.minikube
	I1101 09:11:38.245586  546374 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:11:38.246693  546374 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:11:38.248214  546374 config.go:182] Loaded profile config "functional-854568": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:11:38.248667  546374 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:11:38.278963  546374 out.go:179] * Using the kvm2 driver based on existing profile
	I1101 09:11:38.280328  546374 start.go:309] selected driver: kvm2
	I1101 09:11:38.280347  546374 start.go:930] validating driver "kvm2" against &{Name:functional-854568 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-854568 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:11:38.280465  546374 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:11:38.281519  546374 cni.go:84] Creating CNI manager for ""
	I1101 09:11:38.281589  546374 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 09:11:38.281653  546374 start.go:353] cluster config:
	{Name:functional-854568 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-854568 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:11:38.283164  546374 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Nov 01 09:16:45 functional-854568 crio[5564]: time="2025-11-01 09:16:45.324475492Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=603f3121-19a1-4318-aebf-facae8b77b29 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:16:45 functional-854568 crio[5564]: time="2025-11-01 09:16:45.326081719Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8f8323c6-8e8c-4b9a-8edb-3d381a867536 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:16:45 functional-854568 crio[5564]: time="2025-11-01 09:16:45.327277444Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761988605327145337,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:177580,},InodesUsed:&UInt64Value{Value:89,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8f8323c6-8e8c-4b9a-8edb-3d381a867536 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:16:45 functional-854568 crio[5564]: time="2025-11-01 09:16:45.328454852Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e074f87b-3384-4744-a537-918faab2e7f2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:16:45 functional-854568 crio[5564]: time="2025-11-01 09:16:45.328869072Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e074f87b-3384-4744-a537-918faab2e7f2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:16:45 functional-854568 crio[5564]: time="2025-11-01 09:16:45.329632198Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e1db797037a3e231a8ffd1c56a3e45cc9827cda7e2a2a278c8d970fdbd3df2b1,PodSandboxId:d67ad6ff7673b08a9cc8c42942ae42dc1c4dc95cb75904a0d73bdefacfe9321e,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761988291559841371,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 249b33c1-c442-4698-8c37-9d6af53ed2fc,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6d9bcb479648406e7787a6a7f84f9254b8acb19b54aee4ce9e4edd9ab40c17,PodSandboxId:0e5dbb626ffafe655eb136e4e598093f4f7349f42c16b9697b40ea2f7815d2cc,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1761988259082898736,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-8fqgj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 645dc979-5e33-4017-b9c6-399736482d7d,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27e849fb394fced4618d4f167f2d823a9c7ca62600a1d78cf02fea45d44d76df,PodSandboxId:42ddeb7ee9b6605f7143ce6b4a34ae2aedb45066e7a3b4753c7aa32ffab02389,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761988234776455282,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e932432e-8369-4ac7-be62-15697906b114,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount:
4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15fd989610bc82b5ff7d2143c752c984a8ed407cd980a1d913715ac95f1a45d,PodSandboxId:ab5e8ba1a8d18c809b77802574cda9346aeb390ec2de791545670977d988de80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761988234785739417,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8qv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d891ac56-f0c4-46ba-bce1-fb68e7eb54a3,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b2d3d715d65d8daca359ee84aa5bb213762342047206346ec68002680e2c6a6,PodSandboxId:21ec93d6e0dcfc1472ca0a8bd0345c30311f79463dfcf545e3c7c76edb53e5bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761988231321175028,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 567794742ee267e0898306a2bfdc060c,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"contain
erPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e1306ed5ca1da3b4bb7e6a76b365506383370faadb8ae1ef828ed8e2856a116,PodSandboxId:70138226f92eb528456f8b9ea362b6f28c8d944efd0a34c0ba04075dcd37c4ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761988231135134598,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7204fc2807c91c2baeb21d904e5b3e8,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcaae31b320d80c9f04d5efd24184a4beb5ba44a54a55897bc3885db2101c53,PodSandboxId:61712013dba8793e05ff50b6ff4f269eeb142cef8809b28fb70de3fa57998398,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761988231103923096,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-854568,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 6a10c03a29f4d4d9c61649b9a5d64941,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0451d0b1d08ba6977476b1cc2964353404f0b83988abcf00a95a01b3055c6a10,PodSandboxId:1ee40d241e597c98bab9769d8ae0cf1883e1737a1ca60de4ff46c366a9794298,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761988228712767990,Labels:map[string]string{io.kubernetes.container.name: coredns,
io.kubernetes.pod.name: coredns-66bc5c9577-626v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534f1588-2719-4435-9399-fcf4dff390de,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bc1398379b4a0842eca102935669fe8ffb1bfa5acb9325f2477e376a4ca6a00,PodSandboxId:58f8c972b4dbedd2a539c96f4b72b7b8be76d6b72158faab4c02381a8726e773,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,}
,Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761988227786507499,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b858069348de84ce0334761afe76b9b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d109feadf1871d0729895d871197182682bc15a08c9e3b8946bde6b349051334,PodSandboxId:42ddeb7ee9b660
5f7143ce6b4a34ae2aedb45066e7a3b4753c7aa32ffab02389,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761988227575670752,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e932432e-8369-4ac7-be62-15697906b114,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806eea7f9cd39165a634bd0823e0beeaf596c091f2cb1e52c537e2a119cc0493,PodSandboxId:61712013dba8793e05ff50b6ff4
f269eeb142cef8809b28fb70de3fa57998398,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761988227449639556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a10c03a29f4d4d9c61649b9a5d64941,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de92f64e8c564b2e15a82533a321
66c758aeafe35bbc57469519bb24cd65be57,PodSandboxId:70138226f92eb528456f8b9ea362b6f28c8d944efd0a34c0ba04075dcd37c4ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761988227542580924,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7204fc2807c91c2baeb21d904e5b3e8,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e71039fa4c92372a4d04f9348709d0fc7cedeaa9c8d054fbf0d38ab2da2f3b1,PodSandboxId:ab5e8ba1a8d18c809b77802574cda9346aeb390ec2de791545670977d988de80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761988227321643763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8qv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d891ac56-f0c4-46ba-bce1-fb68e7eb54a3,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fbfacd7f2a11f1e822e898ef1c1a0d7d4c85fd05899505e011528adcfbc480c,PodSandboxId:ff3380e3e50ee333855f1e94c42078ac4667a94d5708722ca2db9b78941f9305,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761988186258636450,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b858069348de84ce0334761afe76b9b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cd8344d8c832c19add0478500062dcd8ed023406e149142e78a049f0304e04c,PodSandboxId:952c34f1f33f41404348bdffb010de32512512f46f9a22c5919b2e55aadaad34,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761988172472296819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-626v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534f1588-2719-4435-9399-fcf4dff390de,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e074f87b-3384-4744-a537-918faab2e7f2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:16:45 functional-854568 crio[5564]: time="2025-11-01 09:16:45.366530446Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a4f8b766-9870-4bb5-8e70-152aec22eb32 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:16:45 functional-854568 crio[5564]: time="2025-11-01 09:16:45.366599701Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a4f8b766-9870-4bb5-8e70-152aec22eb32 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:16:45 functional-854568 crio[5564]: time="2025-11-01 09:16:45.368898203Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1f498a7a-0898-4f8d-8fe6-f872bc10d21c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:16:45 functional-854568 crio[5564]: time="2025-11-01 09:16:45.369884648Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761988605369860988,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:177580,},InodesUsed:&UInt64Value{Value:89,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1f498a7a-0898-4f8d-8fe6-f872bc10d21c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:16:45 functional-854568 crio[5564]: time="2025-11-01 09:16:45.370666052Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3d487b9c-d232-442f-9b66-eff755b19d4b name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:16:45 functional-854568 crio[5564]: time="2025-11-01 09:16:45.370774155Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3d487b9c-d232-442f-9b66-eff755b19d4b name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:16:45 functional-854568 crio[5564]: time="2025-11-01 09:16:45.371225537Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e1db797037a3e231a8ffd1c56a3e45cc9827cda7e2a2a278c8d970fdbd3df2b1,PodSandboxId:d67ad6ff7673b08a9cc8c42942ae42dc1c4dc95cb75904a0d73bdefacfe9321e,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761988291559841371,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 249b33c1-c442-4698-8c37-9d6af53ed2fc,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6d9bcb479648406e7787a6a7f84f9254b8acb19b54aee4ce9e4edd9ab40c17,PodSandboxId:0e5dbb626ffafe655eb136e4e598093f4f7349f42c16b9697b40ea2f7815d2cc,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1761988259082898736,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-8fqgj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 645dc979-5e33-4017-b9c6-399736482d7d,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27e849fb394fced4618d4f167f2d823a9c7ca62600a1d78cf02fea45d44d76df,PodSandboxId:42ddeb7ee9b6605f7143ce6b4a34ae2aedb45066e7a3b4753c7aa32ffab02389,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761988234776455282,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e932432e-8369-4ac7-be62-15697906b114,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount:
4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15fd989610bc82b5ff7d2143c752c984a8ed407cd980a1d913715ac95f1a45d,PodSandboxId:ab5e8ba1a8d18c809b77802574cda9346aeb390ec2de791545670977d988de80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761988234785739417,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8qv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d891ac56-f0c4-46ba-bce1-fb68e7eb54a3,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b2d3d715d65d8daca359ee84aa5bb213762342047206346ec68002680e2c6a6,PodSandboxId:21ec93d6e0dcfc1472ca0a8bd0345c30311f79463dfcf545e3c7c76edb53e5bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761988231321175028,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 567794742ee267e0898306a2bfdc060c,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"contain
erPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e1306ed5ca1da3b4bb7e6a76b365506383370faadb8ae1ef828ed8e2856a116,PodSandboxId:70138226f92eb528456f8b9ea362b6f28c8d944efd0a34c0ba04075dcd37c4ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761988231135134598,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7204fc2807c91c2baeb21d904e5b3e8,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcaae31b320d80c9f04d5efd24184a4beb5ba44a54a55897bc3885db2101c53,PodSandboxId:61712013dba8793e05ff50b6ff4f269eeb142cef8809b28fb70de3fa57998398,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761988231103923096,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-854568,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 6a10c03a29f4d4d9c61649b9a5d64941,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0451d0b1d08ba6977476b1cc2964353404f0b83988abcf00a95a01b3055c6a10,PodSandboxId:1ee40d241e597c98bab9769d8ae0cf1883e1737a1ca60de4ff46c366a9794298,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761988228712767990,Labels:map[string]string{io.kubernetes.container.name: coredns,
io.kubernetes.pod.name: coredns-66bc5c9577-626v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534f1588-2719-4435-9399-fcf4dff390de,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bc1398379b4a0842eca102935669fe8ffb1bfa5acb9325f2477e376a4ca6a00,PodSandboxId:58f8c972b4dbedd2a539c96f4b72b7b8be76d6b72158faab4c02381a8726e773,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,}
,Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761988227786507499,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b858069348de84ce0334761afe76b9b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d109feadf1871d0729895d871197182682bc15a08c9e3b8946bde6b349051334,PodSandboxId:42ddeb7ee9b660
5f7143ce6b4a34ae2aedb45066e7a3b4753c7aa32ffab02389,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761988227575670752,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e932432e-8369-4ac7-be62-15697906b114,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806eea7f9cd39165a634bd0823e0beeaf596c091f2cb1e52c537e2a119cc0493,PodSandboxId:61712013dba8793e05ff50b6ff4
f269eeb142cef8809b28fb70de3fa57998398,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761988227449639556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a10c03a29f4d4d9c61649b9a5d64941,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de92f64e8c564b2e15a82533a321
66c758aeafe35bbc57469519bb24cd65be57,PodSandboxId:70138226f92eb528456f8b9ea362b6f28c8d944efd0a34c0ba04075dcd37c4ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761988227542580924,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7204fc2807c91c2baeb21d904e5b3e8,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e71039fa4c92372a4d04f9348709d0fc7cedeaa9c8d054fbf0d38ab2da2f3b1,PodSandboxId:ab5e8ba1a8d18c809b77802574cda9346aeb390ec2de791545670977d988de80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761988227321643763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8qv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d891ac56-f0c4-46ba-bce1-fb68e7eb54a3,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fbfacd7f2a11f1e822e898ef1c1a0d7d4c85fd05899505e011528adcfbc480c,PodSandboxId:ff3380e3e50ee333855f1e94c42078ac4667a94d5708722ca2db9b78941f9305,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761988186258636450,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b858069348de84ce0334761afe76b9b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cd8344d8c832c19add0478500062dcd8ed023406e149142e78a049f0304e04c,PodSandboxId:952c34f1f33f41404348bdffb010de32512512f46f9a22c5919b2e55aadaad34,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761988172472296819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-626v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534f1588-2719-4435-9399-fcf4dff390de,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3d487b9c-d232-442f-9b66-eff755b19d4b name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:16:45 functional-854568 crio[5564]: time="2025-11-01 09:16:45.411019099Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f4400628-0104-43c6-849a-c056fe862c3b name=/runtime.v1.RuntimeService/Version
	Nov 01 09:16:45 functional-854568 crio[5564]: time="2025-11-01 09:16:45.411390513Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f4400628-0104-43c6-849a-c056fe862c3b name=/runtime.v1.RuntimeService/Version
	Nov 01 09:16:45 functional-854568 crio[5564]: time="2025-11-01 09:16:45.412806508Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=1eb62da5-c048-428b-86e4-18ff65b4014d name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 01 09:16:45 functional-854568 crio[5564]: time="2025-11-01 09:16:45.413206796Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dd81fbd8-b10e-4279-b2a6-a3c0556ccb90 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:16:45 functional-854568 crio[5564]: time="2025-11-01 09:16:45.413252590Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:dcd516d02b6d5d48f74cddfad8ef22754737425e1ecd89aa19c01f265bf6e8a0,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-855c9754f9-mk8vc,Uid:02ab3a50-c383-42ee-8979-1a3ef29ad317,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761988305927638830,Labels:map[string]string{gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-855c9754f9-mk8vc,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 02ab3a50-c383-42ee-8979-1a3ef29ad317,k8s-app: kubernetes-dashboard,pod-template-hash: 855c9754f9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T09:11:45.585667308Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2b5a36281b928fad9b01c9fd11c75fa5ef658028a80f4abcc3893b0d7e6d8d1f,Metadata:&PodSandboxMetadata{Name
:dashboard-metrics-scraper-77bf4d6c4c-m4r9g,Uid:b35ccd8f-dbbd-4df5-a652-9d21e07e5964,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761988305886756466,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: dashboard-metrics-scraper-77bf4d6c4c-m4r9g,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: b35ccd8f-dbbd-4df5-a652-9d21e07e5964,k8s-app: dashboard-metrics-scraper,pod-template-hash: 77bf4d6c4c,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T09:11:45.568396780Z,kubernetes.io/config.source: api,seccomp.security.alpha.kubernetes.io/pod: runtime/default,},RuntimeHandler:,},&PodSandbox{Id:7e79234a6491cb269a9f392b76a6898202e6b6a246dd849470db11c04ecbd04a,Metadata:&PodSandboxMetadata{Name:mysql-5bb876957f-dqd4j,Uid:dfb32fdc-7568-4c82-ba99-a7def15513c9,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761988269423375075,Labels:map[string]string{app: mysql,io.kubernetes.container.name: POD,io.kubernetes.po
d.name: mysql-5bb876957f-dqd4j,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dfb32fdc-7568-4c82-ba99-a7def15513c9,pod-template-hash: 5bb876957f,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T09:11:09.105326141Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e6ae71b67dfe5c5cc9593b6f389b0f51347caf7382c6910a26b2130bab241405,Metadata:&PodSandboxMetadata{Name:sp-pod,Uid:594fa138-93b5-43b5-b787-97f37ee7079c,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761988263575227778,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 594fa138-93b5-43b5-b787-97f37ee7079c,test: storage-provisioner,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"test\":\"storage-provisioner\"},\"name\":\"sp-pod\",\"namespace\":\"default\"},\"spec\":{\"containe
rs\":[{\"image\":\"docker.io/nginx\",\"name\":\"myfrontend\",\"volumeMounts\":[{\"mountPath\":\"/tmp/mount\",\"name\":\"mypd\"}]}],\"volumes\":[{\"name\":\"mypd\",\"persistentVolumeClaim\":{\"claimName\":\"myclaim\"}}]}}\n,kubernetes.io/config.seen: 2025-11-01T09:11:03.256797992Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d67ad6ff7673b08a9cc8c42942ae42dc1c4dc95cb75904a0d73bdefacfe9321e,Metadata:&PodSandboxMetadata{Name:busybox-mount,Uid:249b33c1-c442-4698-8c37-9d6af53ed2fc,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1761988258915739817,Labels:map[string]string{integration-test: busybox-mount,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 249b33c1-c442-4698-8c37-9d6af53ed2fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T09:10:58.591257622Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f433aa970ebd584d8eb8fcf8ef51a108fcfd8fded0a1543df5d083
5081e763df,Metadata:&PodSandboxMetadata{Name:hello-node-75c85bcc94-pvt5m,Uid:dc5ce2a1-fb71-4117-9dec-aa7f6043b738,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761988258628472225,Labels:map[string]string{app: hello-node,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-75c85bcc94-pvt5m,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dc5ce2a1-fb71-4117-9dec-aa7f6043b738,pod-template-hash: 75c85bcc94,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T09:10:58.307398759Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0e5dbb626ffafe655eb136e4e598093f4f7349f42c16b9697b40ea2f7815d2cc,Metadata:&PodSandboxMetadata{Name:hello-node-connect-7d85dfc575-8fqgj,Uid:645dc979-5e33-4017-b9c6-399736482d7d,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761988257934528120,Labels:map[string]string{app: hello-node-connect,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-8fqgj,io.kubernetes.pod.n
amespace: default,io.kubernetes.pod.uid: 645dc979-5e33-4017-b9c6-399736482d7d,pod-template-hash: 7d85dfc575,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T09:10:57.610518038Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:21ec93d6e0dcfc1472ca0a8bd0345c30311f79463dfcf545e3c7c76edb53e5bf,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-854568,Uid:567794742ee267e0898306a2bfdc060c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761988231124334076,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 567794742ee267e0898306a2bfdc060c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.129:8441,kubernetes.io/config.hash: 567794742ee267e0898306a2bfdc060c,kubernetes.io/config.seen: 2025-11-01T09:10:30.456016208Z,kubernetes.
io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1ee40d241e597c98bab9769d8ae0cf1883e1737a1ca60de4ff46c366a9794298,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-626v2,Uid:534f1588-2719-4435-9399-fcf4dff390de,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1761988227405628437,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-626v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534f1588-2719-4435-9399-fcf4dff390de,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T09:09:50.594019101Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:58f8c972b4dbedd2a539c96f4b72b7b8be76d6b72158faab4c02381a8726e773,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-854568,Uid:3b858069348de84ce0334761afe76b9b,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1761988227151173948,Labels:map[string]string{component: kube-schedule
r,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b858069348de84ce0334761afe76b9b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3b858069348de84ce0334761afe76b9b,kubernetes.io/config.seen: 2025-11-01T09:09:45.600840807Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:42ddeb7ee9b6605f7143ce6b4a34ae2aedb45066e7a3b4753c7aa32ffab02389,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e932432e-8369-4ac7-be62-15697906b114,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1761988226933692785,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e932432e-8369-4ac7-be62-15697906b114,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied
-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-11-01T09:09:50.594030151Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:70138226f92eb528456f8b9ea362b6f28c8d944efd0a34c0ba04075dcd37c4ca,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-854568,Uid:e7204fc2807c91c2baeb21d904e5b3e8,Namespace:kube-system,Attempt:2,},State:SA
NDBOX_READY,CreatedAt:1761988226879887938,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7204fc2807c91c2baeb21d904e5b3e8,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e7204fc2807c91c2baeb21d904e5b3e8,kubernetes.io/config.seen: 2025-11-01T09:09:45.600839956Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:61712013dba8793e05ff50b6ff4f269eeb142cef8809b28fb70de3fa57998398,Metadata:&PodSandboxMetadata{Name:etcd-functional-854568,Uid:6a10c03a29f4d4d9c61649b9a5d64941,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1761988226874600634,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a10c03a29f4d4d9c61649b9a5d64941,tier: control-plane,},An
notations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.129:2379,kubernetes.io/config.hash: 6a10c03a29f4d4d9c61649b9a5d64941,kubernetes.io/config.seen: 2025-11-01T09:09:45.600841618Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ab5e8ba1a8d18c809b77802574cda9346aeb390ec2de791545670977d988de80,Metadata:&PodSandboxMetadata{Name:kube-proxy-p8qv6,Uid:d891ac56-f0c4-46ba-bce1-fb68e7eb54a3,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1761988226845898781,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-p8qv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d891ac56-f0c4-46ba-bce1-fb68e7eb54a3,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T09:09:50.594027946Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:952c34f1f33f41404348bdffb010de32512512f46f9a22c
5919b2e55aadaad34,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-626v2,Uid:534f1588-2719-4435-9399-fcf4dff390de,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1761988171394171086,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-626v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534f1588-2719-4435-9399-fcf4dff390de,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T09:08:43.890678556Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ff3380e3e50ee333855f1e94c42078ac4667a94d5708722ca2db9b78941f9305,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-854568,Uid:3b858069348de84ce0334761afe76b9b,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1761988170735690460,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-854568,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b858069348de84ce0334761afe76b9b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3b858069348de84ce0334761afe76b9b,kubernetes.io/config.seen: 2025-11-01T09:08:38.286620836Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=1eb62da5-c048-428b-86e4-18ff65b4014d name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 01 09:16:45 functional-854568 crio[5564]: time="2025-11-01 09:16:45.415037754Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1068d256-5760-4fc4-948d-09ab469e9327 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:16:45 functional-854568 crio[5564]: time="2025-11-01 09:16:45.415169695Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1068d256-5760-4fc4-948d-09ab469e9327 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:16:45 functional-854568 crio[5564]: time="2025-11-01 09:16:45.415482968Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e1db797037a3e231a8ffd1c56a3e45cc9827cda7e2a2a278c8d970fdbd3df2b1,PodSandboxId:d67ad6ff7673b08a9cc8c42942ae42dc1c4dc95cb75904a0d73bdefacfe9321e,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761988291559841371,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 249b33c1-c442-4698-8c37-9d6af53ed2fc,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6d9bcb479648406e7787a6a7f84f9254b8acb19b54aee4ce9e4edd9ab40c17,PodSandboxId:0e5dbb626ffafe655eb136e4e598093f4f7349f42c16b9697b40ea2f7815d2cc,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1761988259082898736,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-8fqgj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 645dc979-5e33-4017-b9c6-399736482d7d,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27e849fb394fced4618d4f167f2d823a9c7ca62600a1d78cf02fea45d44d76df,PodSandboxId:42ddeb7ee9b6605f7143ce6b4a34ae2aedb45066e7a3b4753c7aa32ffab02389,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761988234776455282,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e932432e-8369-4ac7-be62-15697906b114,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount:
4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15fd989610bc82b5ff7d2143c752c984a8ed407cd980a1d913715ac95f1a45d,PodSandboxId:ab5e8ba1a8d18c809b77802574cda9346aeb390ec2de791545670977d988de80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761988234785739417,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8qv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d891ac56-f0c4-46ba-bce1-fb68e7eb54a3,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b2d3d715d65d8daca359ee84aa5bb213762342047206346ec68002680e2c6a6,PodSandboxId:21ec93d6e0dcfc1472ca0a8bd0345c30311f79463dfcf545e3c7c76edb53e5bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761988231321175028,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 567794742ee267e0898306a2bfdc060c,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"contain
erPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e1306ed5ca1da3b4bb7e6a76b365506383370faadb8ae1ef828ed8e2856a116,PodSandboxId:70138226f92eb528456f8b9ea362b6f28c8d944efd0a34c0ba04075dcd37c4ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761988231135134598,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7204fc2807c91c2baeb21d904e5b3e8,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcaae31b320d80c9f04d5efd24184a4beb5ba44a54a55897bc3885db2101c53,PodSandboxId:61712013dba8793e05ff50b6ff4f269eeb142cef8809b28fb70de3fa57998398,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761988231103923096,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-854568,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 6a10c03a29f4d4d9c61649b9a5d64941,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0451d0b1d08ba6977476b1cc2964353404f0b83988abcf00a95a01b3055c6a10,PodSandboxId:1ee40d241e597c98bab9769d8ae0cf1883e1737a1ca60de4ff46c366a9794298,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761988228712767990,Labels:map[string]string{io.kubernetes.container.name: coredns,
io.kubernetes.pod.name: coredns-66bc5c9577-626v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534f1588-2719-4435-9399-fcf4dff390de,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bc1398379b4a0842eca102935669fe8ffb1bfa5acb9325f2477e376a4ca6a00,PodSandboxId:58f8c972b4dbedd2a539c96f4b72b7b8be76d6b72158faab4c02381a8726e773,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,}
,Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761988227786507499,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b858069348de84ce0334761afe76b9b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d109feadf1871d0729895d871197182682bc15a08c9e3b8946bde6b349051334,PodSandboxId:42ddeb7ee9b660
5f7143ce6b4a34ae2aedb45066e7a3b4753c7aa32ffab02389,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761988227575670752,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e932432e-8369-4ac7-be62-15697906b114,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806eea7f9cd39165a634bd0823e0beeaf596c091f2cb1e52c537e2a119cc0493,PodSandboxId:61712013dba8793e05ff50b6ff4
f269eeb142cef8809b28fb70de3fa57998398,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761988227449639556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a10c03a29f4d4d9c61649b9a5d64941,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de92f64e8c564b2e15a82533a321
66c758aeafe35bbc57469519bb24cd65be57,PodSandboxId:70138226f92eb528456f8b9ea362b6f28c8d944efd0a34c0ba04075dcd37c4ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761988227542580924,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7204fc2807c91c2baeb21d904e5b3e8,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e71039fa4c92372a4d04f9348709d0fc7cedeaa9c8d054fbf0d38ab2da2f3b1,PodSandboxId:ab5e8ba1a8d18c809b77802574cda9346aeb390ec2de791545670977d988de80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761988227321643763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8qv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d891ac56-f0c4-46ba-bce1-fb68e7eb54a3,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fbfacd7f2a11f1e822e898ef1c1a0d7d4c85fd05899505e011528adcfbc480c,PodSandboxId:ff3380e3e50ee333855f1e94c42078ac4667a94d5708722ca2db9b78941f9305,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761988186258636450,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b858069348de84ce0334761afe76b9b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cd8344d8c832c19add0478500062dcd8ed023406e149142e78a049f0304e04c,PodSandboxId:952c34f1f33f41404348bdffb010de32512512f46f9a22c5919b2e55aadaad34,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761988172472296819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-626v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534f1588-2719-4435-9399-fcf4dff390de,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1068d256-5760-4fc4-948d-09ab469e9327 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:16:45 functional-854568 crio[5564]: time="2025-11-01 09:16:45.415704372Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761988605415683647,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:177580,},InodesUsed:&UInt64Value{Value:89,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dd81fbd8-b10e-4279-b2a6-a3c0556ccb90 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:16:45 functional-854568 crio[5564]: time="2025-11-01 09:16:45.416310857Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=62b84e64-d41d-4a89-96dd-7aa7964eee50 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:16:45 functional-854568 crio[5564]: time="2025-11-01 09:16:45.416398772Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=62b84e64-d41d-4a89-96dd-7aa7964eee50 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:16:45 functional-854568 crio[5564]: time="2025-11-01 09:16:45.416647088Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e1db797037a3e231a8ffd1c56a3e45cc9827cda7e2a2a278c8d970fdbd3df2b1,PodSandboxId:d67ad6ff7673b08a9cc8c42942ae42dc1c4dc95cb75904a0d73bdefacfe9321e,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761988291559841371,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 249b33c1-c442-4698-8c37-9d6af53ed2fc,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6d9bcb479648406e7787a6a7f84f9254b8acb19b54aee4ce9e4edd9ab40c17,PodSandboxId:0e5dbb626ffafe655eb136e4e598093f4f7349f42c16b9697b40ea2f7815d2cc,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1761988259082898736,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-8fqgj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 645dc979-5e33-4017-b9c6-399736482d7d,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27e849fb394fced4618d4f167f2d823a9c7ca62600a1d78cf02fea45d44d76df,PodSandboxId:42ddeb7ee9b6605f7143ce6b4a34ae2aedb45066e7a3b4753c7aa32ffab02389,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761988234776455282,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e932432e-8369-4ac7-be62-15697906b114,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount:
4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15fd989610bc82b5ff7d2143c752c984a8ed407cd980a1d913715ac95f1a45d,PodSandboxId:ab5e8ba1a8d18c809b77802574cda9346aeb390ec2de791545670977d988de80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761988234785739417,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8qv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d891ac56-f0c4-46ba-bce1-fb68e7eb54a3,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b2d3d715d65d8daca359ee84aa5bb213762342047206346ec68002680e2c6a6,PodSandboxId:21ec93d6e0dcfc1472ca0a8bd0345c30311f79463dfcf545e3c7c76edb53e5bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761988231321175028,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 567794742ee267e0898306a2bfdc060c,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"contain
erPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e1306ed5ca1da3b4bb7e6a76b365506383370faadb8ae1ef828ed8e2856a116,PodSandboxId:70138226f92eb528456f8b9ea362b6f28c8d944efd0a34c0ba04075dcd37c4ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761988231135134598,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7204fc2807c91c2baeb21d904e5b3e8,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcaae31b320d80c9f04d5efd24184a4beb5ba44a54a55897bc3885db2101c53,PodSandboxId:61712013dba8793e05ff50b6ff4f269eeb142cef8809b28fb70de3fa57998398,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761988231103923096,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-854568,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 6a10c03a29f4d4d9c61649b9a5d64941,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0451d0b1d08ba6977476b1cc2964353404f0b83988abcf00a95a01b3055c6a10,PodSandboxId:1ee40d241e597c98bab9769d8ae0cf1883e1737a1ca60de4ff46c366a9794298,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761988228712767990,Labels:map[string]string{io.kubernetes.container.name: coredns,
io.kubernetes.pod.name: coredns-66bc5c9577-626v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534f1588-2719-4435-9399-fcf4dff390de,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bc1398379b4a0842eca102935669fe8ffb1bfa5acb9325f2477e376a4ca6a00,PodSandboxId:58f8c972b4dbedd2a539c96f4b72b7b8be76d6b72158faab4c02381a8726e773,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,}
,Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761988227786507499,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b858069348de84ce0334761afe76b9b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d109feadf1871d0729895d871197182682bc15a08c9e3b8946bde6b349051334,PodSandboxId:42ddeb7ee9b660
5f7143ce6b4a34ae2aedb45066e7a3b4753c7aa32ffab02389,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761988227575670752,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e932432e-8369-4ac7-be62-15697906b114,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806eea7f9cd39165a634bd0823e0beeaf596c091f2cb1e52c537e2a119cc0493,PodSandboxId:61712013dba8793e05ff50b6ff4
f269eeb142cef8809b28fb70de3fa57998398,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761988227449639556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a10c03a29f4d4d9c61649b9a5d64941,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de92f64e8c564b2e15a82533a321
66c758aeafe35bbc57469519bb24cd65be57,PodSandboxId:70138226f92eb528456f8b9ea362b6f28c8d944efd0a34c0ba04075dcd37c4ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761988227542580924,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7204fc2807c91c2baeb21d904e5b3e8,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e71039fa4c92372a4d04f9348709d0fc7cedeaa9c8d054fbf0d38ab2da2f3b1,PodSandboxId:ab5e8ba1a8d18c809b77802574cda9346aeb390ec2de791545670977d988de80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761988227321643763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8qv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d891ac56-f0c4-46ba-bce1-fb68e7eb54a3,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fbfacd7f2a11f1e822e898ef1c1a0d7d4c85fd05899505e011528adcfbc480c,PodSandboxId:ff3380e3e50ee333855f1e94c42078ac4667a94d5708722ca2db9b78941f9305,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761988186258636450,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b858069348de84ce0334761afe76b9b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cd8344d8c832c19add0478500062dcd8ed023406e149142e78a049f0304e04c,PodSandboxId:952c34f1f33f41404348bdffb010de32512512f46f9a22c5919b2e55aadaad34,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761988172472296819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-626v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534f1588-2719-4435-9399-fcf4dff390de,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=62b84e64-d41d-4a89-96dd-7aa7964eee50 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e1db797037a3e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e     5 minutes ago       Exited              mount-munger              0                   d67ad6ff7673b       busybox-mount
	ad6d9bcb47964       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6   5 minutes ago       Running             echo-server               0                   0e5dbb626ffaf       hello-node-connect-7d85dfc575-8fqgj
	b15fd989610bc       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                        6 minutes ago       Running             kube-proxy                4                   ab5e8ba1a8d18       kube-proxy-p8qv6
	27e849fb394fc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                        6 minutes ago       Running             storage-provisioner       4                   42ddeb7ee9b66       storage-provisioner
	0b2d3d715d65d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                        6 minutes ago       Running             kube-apiserver            0                   21ec93d6e0dcf       kube-apiserver-functional-854568
	7e1306ed5ca1d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                        6 minutes ago       Running             kube-controller-manager   4                   70138226f92eb       kube-controller-manager-functional-854568
	4dcaae31b320d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                        6 minutes ago       Running             etcd                      4                   61712013dba87       etcd-functional-854568
	0451d0b1d08ba       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                        6 minutes ago       Running             coredns                   2                   1ee40d241e597       coredns-66bc5c9577-626v2
	0bc1398379b4a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                        6 minutes ago       Running             kube-scheduler            3                   58f8c972b4dbe       kube-scheduler-functional-854568
	d109feadf1871       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                        6 minutes ago       Exited              storage-provisioner       3                   42ddeb7ee9b66       storage-provisioner
	de92f64e8c564       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                        6 minutes ago       Exited              kube-controller-manager   3                   70138226f92eb       kube-controller-manager-functional-854568
	806eea7f9cd39       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                        6 minutes ago       Exited              etcd                      3                   61712013dba87       etcd-functional-854568
	7e71039fa4c92       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                        6 minutes ago       Exited              kube-proxy                3                   ab5e8ba1a8d18       kube-proxy-p8qv6
	0fbfacd7f2a11       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                        6 minutes ago       Exited              kube-scheduler            2                   ff3380e3e50ee       kube-scheduler-functional-854568
	5cd8344d8c832       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                        7 minutes ago       Exited              coredns                   1                   952c34f1f33f4       coredns-66bc5c9577-626v2
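The table above is the CRI runtime's view of every container on the node, including the exited earlier attempts of the control-plane pods. A table in this shape can usually be reproduced directly on the node; the invocation below is only an illustrative sketch (the profile name is taken from the logs above, the rest is assumed rather than shown in this report):

  # list all containers, running and exited, as reported by the CRI runtime (cri-o here)
  out/minikube-linux-amd64 -p functional-854568 ssh -- sudo crictl ps -a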
	
	
	==> coredns [0451d0b1d08ba6977476b1cc2964353404f0b83988abcf00a95a01b3055c6a10] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40232 - 22482 "HINFO IN 5854806722054425578.3190548008883538820. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.030681733s
	
	
	==> coredns [5cd8344d8c832c19add0478500062dcd8ed023406e149142e78a049f0304e04c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50651 - 59818 "HINFO IN 8748826513468128324.7719950190033398852. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018360541s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-854568
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-854568
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=functional-854568
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_08_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:08:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-854568
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:16:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:15:20 +0000   Sat, 01 Nov 2025 09:08:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:15:20 +0000   Sat, 01 Nov 2025 09:08:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:15:20 +0000   Sat, 01 Nov 2025 09:08:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:15:20 +0000   Sat, 01 Nov 2025 09:08:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.129
	  Hostname:    functional-854568
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 cdac547e78d548549cd4406c550707a8
	  System UUID:                cdac547e-78d5-4854-9cd4-406c550707a8
	  Boot ID:                    4fee0e31-2a9b-4ffb-9a8e-d63cba9bf994
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-pvt5m                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m47s
	  default                     hello-node-connect-7d85dfc575-8fqgj           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m48s
	  default                     mysql-5bb876957f-dqd4j                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    5m36s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	  kube-system                 coredns-66bc5c9577-626v2                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m2s
	  kube-system                 etcd-functional-854568                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m7s
	  kube-system                 kube-apiserver-functional-854568              250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-controller-manager-functional-854568     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                 kube-proxy-p8qv6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 kube-scheduler-functional-854568              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m7s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-m4r9g    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-mk8vc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m59s                  kube-proxy       
	  Normal  Starting                 6m10s                  kube-proxy       
	  Normal  Starting                 6m54s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m7s                   kubelet          Node functional-854568 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  8m7s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m7s                   kubelet          Node functional-854568 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m7s                   kubelet          Node functional-854568 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m7s                   kubelet          Starting kubelet.
	  Normal  NodeReady                8m6s                   kubelet          Node functional-854568 status is now: NodeReady
	  Normal  RegisteredNode           8m3s                   node-controller  Node functional-854568 event: Registered Node functional-854568 in Controller
	  Normal  NodeHasNoDiskPressure    7m (x8 over 7m)        kubelet          Node functional-854568 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  7m (x8 over 7m)        kubelet          Node functional-854568 status is now: NodeHasSufficientMemory
	  Normal  Starting                 7m                     kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     7m (x7 over 7m)        kubelet          Node functional-854568 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m                     kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m53s                  node-controller  Node functional-854568 event: Registered Node functional-854568 in Controller
	  Normal  Starting                 6m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m15s (x8 over 6m15s)  kubelet          Node functional-854568 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m15s (x8 over 6m15s)  kubelet          Node functional-854568 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m15s (x7 over 6m15s)  kubelet          Node functional-854568 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m8s                   node-controller  Node functional-854568 event: Registered Node functional-854568 in Controller
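The node description above (conditions, capacity, non-terminated pods, and events) is the standard kubectl node view for this profile. As a hedged sketch, a comparable view could be pulled with the command below; the context name mirrors the profile, as used elsewhere in this report, and the exact invocation is an assumption:

  # show conditions, allocatable resources, scheduled pods, and recent events for the node
  kubectl --context functional-854568 describe node functional-854568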
	
	
	==> dmesg <==
	[  +0.000042] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.001865] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.187392] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000020] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.090698] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.096999] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.135600] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.000071] kauditd_printk_skb: 18 callbacks suppressed
	[  +9.667190] kauditd_printk_skb: 237 callbacks suppressed
	[Nov 1 09:09] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.107780] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.934436] kauditd_printk_skb: 338 callbacks suppressed
	[  +5.546896] kauditd_printk_skb: 75 callbacks suppressed
	[Nov 1 09:10] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.111141] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.580326] kauditd_printk_skb: 56 callbacks suppressed
	[  +0.631439] kauditd_printk_skb: 314 callbacks suppressed
	[  +1.514979] kauditd_printk_skb: 98 callbacks suppressed
	[  +0.072142] kauditd_printk_skb: 109 callbacks suppressed
	[Nov 1 09:11] kauditd_printk_skb: 107 callbacks suppressed
	[  +5.404869] kauditd_printk_skb: 26 callbacks suppressed
	[ +20.565921] kauditd_printk_skb: 38 callbacks suppressed
	[ +12.688476] kauditd_printk_skb: 31 callbacks suppressed
	[Nov 1 09:12] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [4dcaae31b320d80c9f04d5efd24184a4beb5ba44a54a55897bc3885db2101c53] <==
	{"level":"warn","ts":"2025-11-01T09:10:32.904368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:32.917807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:32.926159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:32.934181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:32.943838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:32.957578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:32.967277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:32.972052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:32.981207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:32.987358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:32.996312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.003094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.014585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.018701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.027009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.039030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.052484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.056306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.069834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.086362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.098344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.101563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.110131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.119029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.163128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58564","server-name":"","error":"EOF"}
	
	
	==> etcd [806eea7f9cd39165a634bd0823e0beeaf596c091f2cb1e52c537e2a119cc0493] <==
	{"level":"info","ts":"2025-11-01T09:10:28.261021Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"a2af9788ad7a361f","local-member-id":"245a8df1c58de0e1","recovered-remote-peer-id":"245a8df1c58de0e1","recovered-remote-peer-urls":["https://192.168.39.129:2380"],"recovered-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-11-01T09:10:28.261035Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	{"level":"info","ts":"2025-11-01T09:10:28.261047Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	{"level":"info","ts":"2025-11-01T09:10:28.261128Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	{"level":"info","ts":"2025-11-01T09:10:28.261265Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"245a8df1c58de0e1 switched to configuration voters=()"}
	{"level":"info","ts":"2025-11-01T09:10:28.261308Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"245a8df1c58de0e1 became follower at term 3"}
	{"level":"info","ts":"2025-11-01T09:10:28.261320Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft 245a8df1c58de0e1 [peers: [], term: 3, commit: 566, applied: 0, lastindex: 566, lastterm: 3]"}
	{"level":"warn","ts":"2025-11-01T09:10:28.268634Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2025-11-01T09:10:28.299822Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":520}
	{"level":"info","ts":"2025-11-01T09:10:28.319741Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2025-11-01T09:10:28.320231Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"245a8df1c58de0e1","timeout":"7s"}
	{"level":"info","ts":"2025-11-01T09:10:28.320514Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"245a8df1c58de0e1"}
	{"level":"info","ts":"2025-11-01T09:10:28.320587Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"245a8df1c58de0e1","local-server-version":"3.6.4","cluster-id":"a2af9788ad7a361f","cluster-version":"3.6"}
	{"level":"info","ts":"2025-11-01T09:10:28.320895Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"245a8df1c58de0e1","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-11-01T09:10:28.321037Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T09:10:28.321065Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T09:10:28.321072Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T09:10:28.322905Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"245a8df1c58de0e1 switched to configuration voters=(2619562202810409185)"}
	{"level":"info","ts":"2025-11-01T09:10:28.324060Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"a2af9788ad7a361f","local-member-id":"245a8df1c58de0e1","added-peer-id":"245a8df1c58de0e1","added-peer-peer-urls":["https://192.168.39.129:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-11-01T09:10:28.324182Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"a2af9788ad7a361f","local-member-id":"245a8df1c58de0e1","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-11-01T09:10:28.327793Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-01T09:10:28.333565Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"245a8df1c58de0e1","initial-advertise-peer-urls":["https://192.168.39.129:2380"],"listen-peer-urls":["https://192.168.39.129:2380"],"advertise-client-urls":["https://192.168.39.129:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.129:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-01T09:10:28.333610Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-01T09:10:28.334221Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.129:2380"}
	{"level":"info","ts":"2025-11-01T09:10:28.334264Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.129:2380"}
	
	
	==> kernel <==
	 09:16:45 up 8 min,  0 users,  load average: 0.16, 0.37, 0.28
	Linux functional-854568 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [0b2d3d715d65d8daca359ee84aa5bb213762342047206346ec68002680e2c6a6] <==
	I1101 09:10:34.016511       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 09:10:34.016606       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 09:10:34.016636       1 policy_source.go:240] refreshing policies
	I1101 09:10:34.017478       1 aggregator.go:171] initial CRD sync complete...
	I1101 09:10:34.017511       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 09:10:34.017517       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:10:34.017522       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:10:34.017636       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 09:10:34.032707       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:10:34.521559       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:10:34.719912       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:10:35.759836       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:10:35.810339       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 09:10:35.835705       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:10:35.847097       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:10:37.471420       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:10:37.521759       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:10:53.026913       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.16.89"}
	I1101 09:10:57.543323       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:10:57.668451       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.98.190.164"}
	I1101 09:10:58.399714       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.3.18"}
	I1101 09:11:09.020220       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.99.182.209"}
	I1101 09:11:45.352100       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:11:45.701480       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.47.106"}
	I1101 09:11:45.721847       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.239.202"}
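The "allocated clusterIPs" entries above record the Services created while the functional tests ran (hello-node, hello-node-connect, mysql, and the dashboard Services). A hedged way to cross-check them against the live cluster, assuming the usual profile-named context:

  # list Services in all namespaces together with their cluster IPs
  kubectl --context functional-854568 get services -A -o wide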
	
	
	==> kube-controller-manager [7e1306ed5ca1da3b4bb7e6a76b365506383370faadb8ae1ef828ed8e2856a116] <==
	I1101 09:10:37.273009       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 09:10:37.274250       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 09:10:37.272871       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 09:10:37.277632       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:10:37.280560       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 09:10:37.280639       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:10:37.282005       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:10:37.282030       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:10:37.282037       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:10:37.282491       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:10:37.286271       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 09:10:37.288626       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:10:37.291296       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:10:37.292027       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 09:10:37.294812       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 09:10:37.301810       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 09:10:37.301873       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 09:10:37.308193       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:10:37.314539       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 09:10:37.319476       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	E1101 09:11:45.470473       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:11:45.490084       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:11:45.497121       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:11:45.519845       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:11:45.526166       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [de92f64e8c564b2e15a82533a32166c758aeafe35bbc57469519bb24cd65be57] <==
	
	
	==> kube-proxy [7e71039fa4c92372a4d04f9348709d0fc7cedeaa9c8d054fbf0d38ab2da2f3b1] <==
	I1101 09:10:27.847135       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:10:27.940781       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1101 09:10:27.943378       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-854568&limit=500&resourceVersion=0\": dial tcp 192.168.39.129:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-proxy [b15fd989610bc82b5ff7d2143c752c984a8ed407cd980a1d913715ac95f1a45d] <==
	I1101 09:10:35.196540       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:10:35.297438       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:10:35.297465       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.129"]
	E1101 09:10:35.297669       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:10:35.337819       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 09:10:35.337886       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 09:10:35.337907       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:10:35.348531       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:10:35.348822       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:10:35.348835       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:10:35.351142       1 config.go:309] "Starting node config controller"
	I1101 09:10:35.351171       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:10:35.351178       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:10:35.362216       1 config.go:200] "Starting service config controller"
	I1101 09:10:35.362396       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:10:35.362429       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:10:35.363077       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:10:35.362692       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:10:35.363316       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:10:35.363374       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:10:35.463352       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:10:35.463497       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [0bc1398379b4a0842eca102935669fe8ffb1bfa5acb9325f2477e376a4ca6a00] <==
	E1101 09:10:30.940568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.39.129:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.129:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:10:31.038150       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.129:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.129:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:10:31.048810       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.129:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.129:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:10:31.122051       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.39.129:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.129:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1101 09:10:31.130031       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.39.129:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.129:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:10:31.179610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.39.129:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.129:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:10:31.201604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.39.129:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.129:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:10:33.833731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:10:33.833803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:10:33.833850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:10:33.833894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:10:33.834511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:10:33.834805       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:10:33.835005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:10:33.835221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:10:33.835472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:10:33.835690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:10:33.835916       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:10:33.836419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:10:33.836534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:10:33.836754       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:10:33.838399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:10:33.838448       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:10:33.868441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1101 09:10:37.999449       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [0fbfacd7f2a11f1e822e898ef1c1a0d7d4c85fd05899505e011528adcfbc480c] <==
	I1101 09:09:47.905692       1 serving.go:386] Generated self-signed cert in-memory
	W1101 09:09:49.393112       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 09:09:49.393155       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 09:09:49.393166       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 09:09:49.393171       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 09:09:49.503208       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:09:49.503248       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:09:49.507313       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:09:49.507383       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:09:49.507850       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:09:49.507909       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:09:49.608223       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:10:11.028026       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1101 09:10:11.028140       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1101 09:10:11.028163       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1101 09:10:11.028183       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1101 09:10:11.028202       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 01 09:15:38 functional-854568 kubelet[6640]: E1101 09:15:38.340684    6640 kuberuntime_manager.go:1449] "Unhandled Error" err="container mysql start failed in pod mysql-5bb876957f-dqd4j_default(dfb32fdc-7568-4c82-ba99-a7def15513c9): ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 01 09:15:38 functional-854568 kubelet[6640]: E1101 09:15:38.340719    6640 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-dqd4j" podUID="dfb32fdc-7568-4c82-ba99-a7def15513c9"
	Nov 01 09:15:40 functional-854568 kubelet[6640]: E1101 09:15:40.723992    6640 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761988540723531937  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177580}  inodes_used:{value:89}}"
	Nov 01 09:15:40 functional-854568 kubelet[6640]: E1101 09:15:40.724034    6640 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761988540723531937  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177580}  inodes_used:{value:89}}"
	Nov 01 09:15:50 functional-854568 kubelet[6640]: E1101 09:15:50.726003    6640 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761988550725535937  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177580}  inodes_used:{value:89}}"
	Nov 01 09:15:50 functional-854568 kubelet[6640]: E1101 09:15:50.726045    6640 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761988550725535937  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177580}  inodes_used:{value:89}}"
	Nov 01 09:15:52 functional-854568 kubelet[6640]: E1101 09:15:52.485146    6640 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-dqd4j" podUID="dfb32fdc-7568-4c82-ba99-a7def15513c9"
	Nov 01 09:16:00 functional-854568 kubelet[6640]: E1101 09:16:00.728355    6640 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761988560727833770  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177580}  inodes_used:{value:89}}"
	Nov 01 09:16:00 functional-854568 kubelet[6640]: E1101 09:16:00.728387    6640 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761988560727833770  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177580}  inodes_used:{value:89}}"
	Nov 01 09:16:08 functional-854568 kubelet[6640]: E1101 09:16:08.424133    6640 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Nov 01 09:16:08 functional-854568 kubelet[6640]: E1101 09:16:08.424197    6640 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Nov 01 09:16:08 functional-854568 kubelet[6640]: E1101 09:16:08.424456    6640 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-pvt5m_default(dc5ce2a1-fb71-4117-9dec-aa7f6043b738): ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 01 09:16:08 functional-854568 kubelet[6640]: E1101 09:16:08.424492    6640 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-pvt5m" podUID="dc5ce2a1-fb71-4117-9dec-aa7f6043b738"
	Nov 01 09:16:10 functional-854568 kubelet[6640]: E1101 09:16:10.730000    6640 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761988570729653589  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177580}  inodes_used:{value:89}}"
	Nov 01 09:16:10 functional-854568 kubelet[6640]: E1101 09:16:10.730025    6640 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761988570729653589  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177580}  inodes_used:{value:89}}"
	Nov 01 09:16:20 functional-854568 kubelet[6640]: E1101 09:16:20.732191    6640 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761988580731773575  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177580}  inodes_used:{value:89}}"
	Nov 01 09:16:20 functional-854568 kubelet[6640]: E1101 09:16:20.732239    6640 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761988580731773575  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177580}  inodes_used:{value:89}}"
	Nov 01 09:16:23 functional-854568 kubelet[6640]: E1101 09:16:23.479278    6640 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-pvt5m" podUID="dc5ce2a1-fb71-4117-9dec-aa7f6043b738"
	Nov 01 09:16:30 functional-854568 kubelet[6640]: E1101 09:16:30.616364    6640 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod3b858069348de84ce0334761afe76b9b/crio-ff3380e3e50ee333855f1e94c42078ac4667a94d5708722ca2db9b78941f9305: Error finding container ff3380e3e50ee333855f1e94c42078ac4667a94d5708722ca2db9b78941f9305: Status 404 returned error can't find the container with id ff3380e3e50ee333855f1e94c42078ac4667a94d5708722ca2db9b78941f9305
	Nov 01 09:16:30 functional-854568 kubelet[6640]: E1101 09:16:30.616677    6640 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod534f1588-2719-4435-9399-fcf4dff390de/crio-952c34f1f33f41404348bdffb010de32512512f46f9a22c5919b2e55aadaad34: Error finding container 952c34f1f33f41404348bdffb010de32512512f46f9a22c5919b2e55aadaad34: Status 404 returned error can't find the container with id 952c34f1f33f41404348bdffb010de32512512f46f9a22c5919b2e55aadaad34
	Nov 01 09:16:30 functional-854568 kubelet[6640]: E1101 09:16:30.734378    6640 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761988590733897528  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177580}  inodes_used:{value:89}}"
	Nov 01 09:16:30 functional-854568 kubelet[6640]: E1101 09:16:30.734402    6640 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761988590733897528  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177580}  inodes_used:{value:89}}"
	Nov 01 09:16:35 functional-854568 kubelet[6640]: E1101 09:16:35.478577    6640 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-pvt5m" podUID="dc5ce2a1-fb71-4117-9dec-aa7f6043b738"
	Nov 01 09:16:40 functional-854568 kubelet[6640]: E1101 09:16:40.737529    6640 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761988600737124638  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177580}  inodes_used:{value:89}}"
	Nov 01 09:16:40 functional-854568 kubelet[6640]: E1101 09:16:40.737556    6640 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761988600737124638  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177580}  inodes_used:{value:89}}"
	
	
	==> storage-provisioner [27e849fb394fced4618d4f167f2d823a9c7ca62600a1d78cf02fea45d44d76df] <==
	W1101 09:16:20.387727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:22.390715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:22.401488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:24.405908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:24.410727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:26.413787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:26.422716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:28.426253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:28.431839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:30.436002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:30.450301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:32.454409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:32.459334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:34.462694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:34.467778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:36.471783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:36.478463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:38.484808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:38.495458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:40.499041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:40.504258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:42.509125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:42.518007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:44.521690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:44.527845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d109feadf1871d0729895d871197182682bc15a08c9e3b8946bde6b349051334] <==
	I1101 09:10:28.204289       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 09:10:28.209290       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-854568 -n functional-854568
helpers_test.go:269: (dbg) Run:  kubectl --context functional-854568 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-pvt5m mysql-5bb876957f-dqd4j sp-pod dashboard-metrics-scraper-77bf4d6c4c-m4r9g kubernetes-dashboard-855c9754f9-mk8vc
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-854568 describe pod busybox-mount hello-node-75c85bcc94-pvt5m mysql-5bb876957f-dqd4j sp-pod dashboard-metrics-scraper-77bf4d6c4c-m4r9g kubernetes-dashboard-855c9754f9-mk8vc
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-854568 describe pod busybox-mount hello-node-75c85bcc94-pvt5m mysql-5bb876957f-dqd4j sp-pod dashboard-metrics-scraper-77bf4d6c4c-m4r9g kubernetes-dashboard-855c9754f9-mk8vc: exit status 1 (95.695618ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-854568/192.168.39.129
	Start Time:       Sat, 01 Nov 2025 09:10:58 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://e1db797037a3e231a8ffd1c56a3e45cc9827cda7e2a2a278c8d970fdbd3df2b1
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 01 Nov 2025 09:11:31 +0000
	      Finished:     Sat, 01 Nov 2025 09:11:31 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fvp2s (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-fvp2s:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m48s  default-scheduler  Successfully assigned default/busybox-mount to functional-854568
	  Normal  Pulling    5m47s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m15s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.358s (32.29s including waiting). Image size: 4631262 bytes.
	  Normal  Created    5m15s  kubelet            Created container: mount-munger
	  Normal  Started    5m15s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-pvt5m
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-854568/192.168.39.129
	Start Time:       Sat, 01 Nov 2025 09:10:58 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-djsds (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-djsds:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m48s                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-pvt5m to functional-854568
	  Warning  Failed     3m39s                  kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m16s (x3 over 5m47s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     38s (x2 over 5m17s)    kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     38s (x3 over 5m17s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    11s (x4 over 5m16s)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     11s (x4 over 5m16s)    kubelet            Error: ImagePullBackOff
	
	
	Name:             mysql-5bb876957f-dqd4j
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-854568/192.168.39.129
	Start Time:       Sat, 01 Nov 2025 09:11:09 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c7rfc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-c7rfc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  5m37s                default-scheduler  Successfully assigned default/mysql-5bb876957f-dqd4j to functional-854568
	  Warning  Failed     68s (x2 over 4m15s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     68s (x2 over 4m15s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    54s (x2 over 4m14s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     54s (x2 over 4m14s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    41s (x3 over 5m37s)  kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-854568/192.168.39.129
	Start Time:       Sat, 01 Nov 2025 09:11:03 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bblfx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-bblfx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  5m43s                default-scheduler  Successfully assigned default/sp-pod to functional-854568
	  Warning  Failed     4m45s                kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     98s (x2 over 4m45s)  kubelet            Error: ErrImagePull
	  Warning  Failed     98s                  kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    87s (x2 over 4m44s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     87s (x2 over 4m44s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    72s (x3 over 5m43s)  kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-m4r9g" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-mk8vc" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-854568 describe pod busybox-mount hello-node-75c85bcc94-pvt5m mysql-5bb876957f-dqd4j sp-pod dashboard-metrics-scraper-77bf4d6c4c-m4r9g kubernetes-dashboard-855c9754f9-mk8vc: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.31s)
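
Every failure in this block has the same signature: the container never starts because its image pull hits Docker Hub's unauthenticated rate limit, so kubelet reports ErrImagePull and then ImagePullBackOff, which the describes above surface as the container's Waiting state. As a rough illustration only (this helper is hypothetical and not part of the minikube test suite), the same condition can be read programmatically from pod status with client-go instead of parsing kubectl describe output:

// Hypothetical helper (not from the minikube test suite): report containers stuck in
// ErrImagePull / ImagePullBackOff by reading pod status directly.
package diag

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// reportImagePullFailures lists pods in ns and prints any container whose Waiting
// reason is a pull failure, the structured form of the "Reason: ImagePullBackOff"
// lines shown in the describe output above.
func reportImagePullFailures(ctx context.Context, c kubernetes.Interface, ns string) error {
	pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		for _, cs := range p.Status.ContainerStatuses {
			if w := cs.State.Waiting; w != nil &&
				(w.Reason == "ImagePullBackOff" || w.Reason == "ErrImagePull") {
				fmt.Printf("%s/%s container %q: %s: %s\n",
					p.Namespace, p.Name, cs.Name, w.Reason, w.Message)
			}
		}
	}
	return nil
}

Checking State.Waiting.Reason on each container status is the structured equivalent of the Waiting/Reason fields rendered in the pod describes above.
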

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (368.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [e932432e-8369-4ac7-be62-15697906b114] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003883048s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-854568 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-854568 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-854568 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-854568 apply -f testdata/storage-provisioner/pod.yaml
I1101 09:11:03.267684  534515 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [594fa138-93b5-43b5-b787-97f37ee7079c] Pending
helpers_test.go:352: "sp-pod" [594fa138-93b5-43b5-b787-97f37ee7079c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-854568 -n functional-854568
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-11-01 09:17:03.498128616 +0000 UTC m=+1958.524805852
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-854568 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-854568 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-854568/192.168.39.129
Start Time:       Sat, 01 Nov 2025 09:11:03 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
IP:  10.244.0.10
Containers:
myfrontend:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bblfx (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-bblfx:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  6m                   default-scheduler  Successfully assigned default/sp-pod to functional-854568
Warning  Failed     5m2s                 kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     115s (x2 over 5m2s)  kubelet            Error: ErrImagePull
Warning  Failed     115s                 kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    104s (x2 over 5m1s)  kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     104s (x2 over 5m1s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    89s (x3 over 6m)     kubelet            Pulling image "docker.io/nginx"
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-854568 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-854568 logs sp-pod -n default: exit status 1 (74.628304ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-854568 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
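
The wait that times out here is a label-selector poll: the harness repeatedly lists pods matching "test=storage-provisioner" in the default namespace and gives up after 6m0s. Below is a minimal sketch of that pattern with client-go; it is an assumption-laden illustration (the package, function name, and two-second interval are invented here), not the actual helpers_test.go implementation:

// Hypothetical sketch (not the helpers_test.go implementation): poll for a Running pod
// matching a label selector, the same shape of wait that timed out above after 6m0s.
package podwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForRunning lists pods matching selector in ns every two seconds until one
// reaches the Running phase or the timeout (e.g. 6*time.Minute here) expires.
func waitForRunning(c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pods, err := c.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			// Treat transient API errors as "not ready yet" and keep polling.
			return false, nil
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return true, nil
			}
		}
		return false, nil
	})
}

A pod stuck in ImagePullBackOff never reaches the Running phase, so a wait of this shape exhausts its timeout exactly as seen for sp-pod above.
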
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-854568 -n functional-854568
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-854568 logs -n 25: (1.527008228s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-854568 ssh sudo systemctl is-active docker                                                                                                        │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │                     │
	│ ssh            │ functional-854568 ssh sudo systemctl is-active containerd                                                                                                    │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │                     │
	│ image          │ functional-854568 image load --daemon kicbase/echo-server:functional-854568 --alsologtostderr                                                                │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ image          │ functional-854568 image ls                                                                                                                                   │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ image          │ functional-854568 image load --daemon kicbase/echo-server:functional-854568 --alsologtostderr                                                                │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ image          │ functional-854568 image ls                                                                                                                                   │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ image          │ functional-854568 image load --daemon kicbase/echo-server:functional-854568 --alsologtostderr                                                                │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ image          │ functional-854568 image ls                                                                                                                                   │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ image          │ functional-854568 image save kicbase/echo-server:functional-854568 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ image          │ functional-854568 image rm kicbase/echo-server:functional-854568 --alsologtostderr                                                                           │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ image          │ functional-854568 image ls                                                                                                                                   │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ image          │ functional-854568 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ image          │ functional-854568 image ls                                                                                                                                   │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ image          │ functional-854568 image save --daemon kicbase/echo-server:functional-854568 --alsologtostderr                                                                │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ dashboard      │ --url --port 36195 -p functional-854568 --alsologtostderr -v=1                                                                                               │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │                     │
	│ update-context │ functional-854568 update-context --alsologtostderr -v=2                                                                                                      │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ update-context │ functional-854568 update-context --alsologtostderr -v=2                                                                                                      │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ update-context │ functional-854568 update-context --alsologtostderr -v=2                                                                                                      │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ image          │ functional-854568 image ls --format short --alsologtostderr                                                                                                  │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ image          │ functional-854568 image ls --format yaml --alsologtostderr                                                                                                   │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ ssh            │ functional-854568 ssh pgrep buildkitd                                                                                                                        │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │                     │
	│ image          │ functional-854568 image build -t localhost/my-image:functional-854568 testdata/build --alsologtostderr                                                       │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ image          │ functional-854568 image ls                                                                                                                                   │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ image          │ functional-854568 image ls --format json --alsologtostderr                                                                                                   │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ image          │ functional-854568 image ls --format table --alsologtostderr                                                                                                  │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	└────────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
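The audit entries above record the image round-trip exercised by the functional image tests: load an image into the cluster, list it, save it to a tarball, remove it, and load it back from the tar. A minimal shell sketch of that sequence, assuming the same binary and profile name shown in the table (the tarball path is shortened here):

  # load a host image into the cluster's container storage
  out/minikube-linux-amd64 -p functional-854568 image load --daemon kicbase/echo-server:functional-854568
  # confirm the image is visible inside the cluster
  out/minikube-linux-amd64 -p functional-854568 image ls
  # save to a tarball, remove from the cluster, then restore from the tarball
  out/minikube-linux-amd64 -p functional-854568 image save kicbase/echo-server:functional-854568 ./echo-server-save.tar
  out/minikube-linux-amd64 -p functional-854568 image rm kicbase/echo-server:functional-854568
  out/minikube-linux-amd64 -p functional-854568 image load ./echo-server-save.tar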
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:11:38
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:11:38.234335  546374 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:11:38.234641  546374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:11:38.234652  546374 out.go:374] Setting ErrFile to fd 2...
	I1101 09:11:38.234660  546374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:11:38.234890  546374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-530629/.minikube/bin
	I1101 09:11:38.235395  546374 out.go:368] Setting JSON to false
	I1101 09:11:38.236295  546374 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":64420,"bootTime":1761923878,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:11:38.236391  546374 start.go:143] virtualization: kvm guest
	I1101 09:11:38.238286  546374 out.go:179] * [functional-854568] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:11:38.239579  546374 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 09:11:38.239605  546374 notify.go:221] Checking for updates...
	I1101 09:11:38.241694  546374 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:11:38.243040  546374 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-530629/kubeconfig
	I1101 09:11:38.244334  546374 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-530629/.minikube
	I1101 09:11:38.245586  546374 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:11:38.246693  546374 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:11:38.248214  546374 config.go:182] Loaded profile config "functional-854568": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:11:38.248667  546374 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:11:38.278963  546374 out.go:179] * Using the kvm2 driver based on existing profile
	I1101 09:11:38.280328  546374 start.go:309] selected driver: kvm2
	I1101 09:11:38.280347  546374 start.go:930] validating driver "kvm2" against &{Name:functional-854568 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-854568 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:11:38.280465  546374 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:11:38.281519  546374 cni.go:84] Creating CNI manager for ""
	I1101 09:11:38.281589  546374 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 09:11:38.281653  546374 start.go:353] cluster config:
	{Name:functional-854568 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-854568 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:11:38.283164  546374 out.go:179] * dry-run validation complete!
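The "Last Start" log above shows minikube re-validating the kvm2 driver against the existing crio profile and echoing the full persisted cluster config before reporting that dry-run validation completed. A short sketch for inspecting the same state by hand, assuming the MINIKUBE_HOME printed above and minikube's usual per-profile config location:

  # re-collect the last 25 log lines, as the post-mortem step did
  out/minikube-linux-amd64 -p functional-854568 logs -n 25
  # the cluster config echoed above is persisted per profile (path assumed from MINIKUBE_HOME)
  cat /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/functional-854568/config.json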
	
	
	==> CRI-O <==
	Nov 01 09:17:04 functional-854568 crio[5564]: time="2025-11-01 09:17:04.313254387Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761988624313229290,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:203241,},InodesUsed:&UInt64Value{Value:105,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7bd160bd-1ab9-4775-9d27-306f8f8ff140 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:17:04 functional-854568 crio[5564]: time="2025-11-01 09:17:04.313783740Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dd859c57-f8f8-44db-94c0-e9d509abf0ee name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:17:04 functional-854568 crio[5564]: time="2025-11-01 09:17:04.313881400Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dd859c57-f8f8-44db-94c0-e9d509abf0ee name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:17:04 functional-854568 crio[5564]: time="2025-11-01 09:17:04.314208478Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e1db797037a3e231a8ffd1c56a3e45cc9827cda7e2a2a278c8d970fdbd3df2b1,PodSandboxId:d67ad6ff7673b08a9cc8c42942ae42dc1c4dc95cb75904a0d73bdefacfe9321e,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761988291559841371,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 249b33c1-c442-4698-8c37-9d6af53ed2fc,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6d9bcb479648406e7787a6a7f84f9254b8acb19b54aee4ce9e4edd9ab40c17,PodSandboxId:0e5dbb626ffafe655eb136e4e598093f4f7349f42c16b9697b40ea2f7815d2cc,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1761988259082898736,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-8fqgj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 645dc979-5e33-4017-b9c6-399736482d7d,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27e849fb394fced4618d4f167f2d823a9c7ca62600a1d78cf02fea45d44d76df,PodSandboxId:42ddeb7ee9b6605f7143ce6b4a34ae2aedb45066e7a3b4753c7aa32ffab02389,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761988234776455282,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e932432e-8369-4ac7-be62-15697906b114,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount:
4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15fd989610bc82b5ff7d2143c752c984a8ed407cd980a1d913715ac95f1a45d,PodSandboxId:ab5e8ba1a8d18c809b77802574cda9346aeb390ec2de791545670977d988de80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761988234785739417,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8qv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d891ac56-f0c4-46ba-bce1-fb68e7eb54a3,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b2d3d715d65d8daca359ee84aa5bb213762342047206346ec68002680e2c6a6,PodSandboxId:21ec93d6e0dcfc1472ca0a8bd0345c30311f79463dfcf545e3c7c76edb53e5bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761988231321175028,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 567794742ee267e0898306a2bfdc060c,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"contain
erPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e1306ed5ca1da3b4bb7e6a76b365506383370faadb8ae1ef828ed8e2856a116,PodSandboxId:70138226f92eb528456f8b9ea362b6f28c8d944efd0a34c0ba04075dcd37c4ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761988231135134598,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7204fc2807c91c2baeb21d904e5b3e8,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcaae31b320d80c9f04d5efd24184a4beb5ba44a54a55897bc3885db2101c53,PodSandboxId:61712013dba8793e05ff50b6ff4f269eeb142cef8809b28fb70de3fa57998398,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761988231103923096,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-854568,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 6a10c03a29f4d4d9c61649b9a5d64941,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0451d0b1d08ba6977476b1cc2964353404f0b83988abcf00a95a01b3055c6a10,PodSandboxId:1ee40d241e597c98bab9769d8ae0cf1883e1737a1ca60de4ff46c366a9794298,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761988228712767990,Labels:map[string]string{io.kubernetes.container.name: coredns,
io.kubernetes.pod.name: coredns-66bc5c9577-626v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534f1588-2719-4435-9399-fcf4dff390de,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bc1398379b4a0842eca102935669fe8ffb1bfa5acb9325f2477e376a4ca6a00,PodSandboxId:58f8c972b4dbedd2a539c96f4b72b7b8be76d6b72158faab4c02381a8726e773,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,}
,Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761988227786507499,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b858069348de84ce0334761afe76b9b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d109feadf1871d0729895d871197182682bc15a08c9e3b8946bde6b349051334,PodSandboxId:42ddeb7ee9b660
5f7143ce6b4a34ae2aedb45066e7a3b4753c7aa32ffab02389,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761988227575670752,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e932432e-8369-4ac7-be62-15697906b114,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806eea7f9cd39165a634bd0823e0beeaf596c091f2cb1e52c537e2a119cc0493,PodSandboxId:61712013dba8793e05ff50b6ff4
f269eeb142cef8809b28fb70de3fa57998398,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761988227449639556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a10c03a29f4d4d9c61649b9a5d64941,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de92f64e8c564b2e15a82533a321
66c758aeafe35bbc57469519bb24cd65be57,PodSandboxId:70138226f92eb528456f8b9ea362b6f28c8d944efd0a34c0ba04075dcd37c4ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761988227542580924,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7204fc2807c91c2baeb21d904e5b3e8,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e71039fa4c92372a4d04f9348709d0fc7cedeaa9c8d054fbf0d38ab2da2f3b1,PodSandboxId:ab5e8ba1a8d18c809b77802574cda9346aeb390ec2de791545670977d988de80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761988227321643763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8qv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d891ac56-f0c4-46ba-bce1-fb68e7eb54a3,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fbfacd7f2a11f1e822e898ef1c1a0d7d4c85fd05899505e011528adcfbc480c,PodSandboxId:ff3380e3e50ee333855f1e94c42078ac4667a94d5708722ca2db9b78941f9305,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761988186258636450,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b858069348de84ce0334761afe76b9b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cd8344d8c832c19add0478500062dcd8ed023406e149142e78a049f0304e04c,PodSandboxId:952c34f1f33f41404348bdffb010de32512512f46f9a22c5919b2e55aadaad34,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761988172472296819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-626v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534f1588-2719-4435-9399-fcf4dff390de,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dd859c57-f8f8-44db-94c0-e9d509abf0ee name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:17:04 functional-854568 crio[5564]: time="2025-11-01 09:17:04.360193638Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8333b203-8056-4736-a581-9337eb9083b9 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:17:04 functional-854568 crio[5564]: time="2025-11-01 09:17:04.360263875Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8333b203-8056-4736-a581-9337eb9083b9 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:17:04 functional-854568 crio[5564]: time="2025-11-01 09:17:04.361362392Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=40668294-15fb-471f-a61d-508d0831074b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:17:04 functional-854568 crio[5564]: time="2025-11-01 09:17:04.362143461Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761988624362119811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:203241,},InodesUsed:&UInt64Value{Value:105,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=40668294-15fb-471f-a61d-508d0831074b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:17:04 functional-854568 crio[5564]: time="2025-11-01 09:17:04.362841932Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7a638364-0053-4817-8f6d-7c7b10015d47 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:17:04 functional-854568 crio[5564]: time="2025-11-01 09:17:04.362919129Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7a638364-0053-4817-8f6d-7c7b10015d47 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:17:04 functional-854568 crio[5564]: time="2025-11-01 09:17:04.363271077Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e1db797037a3e231a8ffd1c56a3e45cc9827cda7e2a2a278c8d970fdbd3df2b1,PodSandboxId:d67ad6ff7673b08a9cc8c42942ae42dc1c4dc95cb75904a0d73bdefacfe9321e,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761988291559841371,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 249b33c1-c442-4698-8c37-9d6af53ed2fc,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6d9bcb479648406e7787a6a7f84f9254b8acb19b54aee4ce9e4edd9ab40c17,PodSandboxId:0e5dbb626ffafe655eb136e4e598093f4f7349f42c16b9697b40ea2f7815d2cc,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1761988259082898736,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-8fqgj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 645dc979-5e33-4017-b9c6-399736482d7d,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27e849fb394fced4618d4f167f2d823a9c7ca62600a1d78cf02fea45d44d76df,PodSandboxId:42ddeb7ee9b6605f7143ce6b4a34ae2aedb45066e7a3b4753c7aa32ffab02389,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761988234776455282,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e932432e-8369-4ac7-be62-15697906b114,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount:
4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15fd989610bc82b5ff7d2143c752c984a8ed407cd980a1d913715ac95f1a45d,PodSandboxId:ab5e8ba1a8d18c809b77802574cda9346aeb390ec2de791545670977d988de80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761988234785739417,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8qv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d891ac56-f0c4-46ba-bce1-fb68e7eb54a3,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b2d3d715d65d8daca359ee84aa5bb213762342047206346ec68002680e2c6a6,PodSandboxId:21ec93d6e0dcfc1472ca0a8bd0345c30311f79463dfcf545e3c7c76edb53e5bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761988231321175028,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 567794742ee267e0898306a2bfdc060c,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"contain
erPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e1306ed5ca1da3b4bb7e6a76b365506383370faadb8ae1ef828ed8e2856a116,PodSandboxId:70138226f92eb528456f8b9ea362b6f28c8d944efd0a34c0ba04075dcd37c4ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761988231135134598,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7204fc2807c91c2baeb21d904e5b3e8,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcaae31b320d80c9f04d5efd24184a4beb5ba44a54a55897bc3885db2101c53,PodSandboxId:61712013dba8793e05ff50b6ff4f269eeb142cef8809b28fb70de3fa57998398,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761988231103923096,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-854568,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 6a10c03a29f4d4d9c61649b9a5d64941,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0451d0b1d08ba6977476b1cc2964353404f0b83988abcf00a95a01b3055c6a10,PodSandboxId:1ee40d241e597c98bab9769d8ae0cf1883e1737a1ca60de4ff46c366a9794298,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761988228712767990,Labels:map[string]string{io.kubernetes.container.name: coredns,
io.kubernetes.pod.name: coredns-66bc5c9577-626v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534f1588-2719-4435-9399-fcf4dff390de,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bc1398379b4a0842eca102935669fe8ffb1bfa5acb9325f2477e376a4ca6a00,PodSandboxId:58f8c972b4dbedd2a539c96f4b72b7b8be76d6b72158faab4c02381a8726e773,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,}
,Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761988227786507499,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b858069348de84ce0334761afe76b9b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d109feadf1871d0729895d871197182682bc15a08c9e3b8946bde6b349051334,PodSandboxId:42ddeb7ee9b660
5f7143ce6b4a34ae2aedb45066e7a3b4753c7aa32ffab02389,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761988227575670752,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e932432e-8369-4ac7-be62-15697906b114,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806eea7f9cd39165a634bd0823e0beeaf596c091f2cb1e52c537e2a119cc0493,PodSandboxId:61712013dba8793e05ff50b6ff4
f269eeb142cef8809b28fb70de3fa57998398,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761988227449639556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a10c03a29f4d4d9c61649b9a5d64941,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de92f64e8c564b2e15a82533a321
66c758aeafe35bbc57469519bb24cd65be57,PodSandboxId:70138226f92eb528456f8b9ea362b6f28c8d944efd0a34c0ba04075dcd37c4ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761988227542580924,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7204fc2807c91c2baeb21d904e5b3e8,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e71039fa4c92372a4d04f9348709d0fc7cedeaa9c8d054fbf0d38ab2da2f3b1,PodSandboxId:ab5e8ba1a8d18c809b77802574cda9346aeb390ec2de791545670977d988de80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761988227321643763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8qv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d891ac56-f0c4-46ba-bce1-fb68e7eb54a3,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fbfacd7f2a11f1e822e898ef1c1a0d7d4c85fd05899505e011528adcfbc480c,PodSandboxId:ff3380e3e50ee333855f1e94c42078ac4667a94d5708722ca2db9b78941f9305,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761988186258636450,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b858069348de84ce0334761afe76b9b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cd8344d8c832c19add0478500062dcd8ed023406e149142e78a049f0304e04c,PodSandboxId:952c34f1f33f41404348bdffb010de32512512f46f9a22c5919b2e55aadaad34,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761988172472296819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-626v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534f1588-2719-4435-9399-fcf4dff390de,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7a638364-0053-4817-8f6d-7c7b10015d47 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:17:04 functional-854568 crio[5564]: time="2025-11-01 09:17:04.401119626Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ef38d0d4-0571-427a-b11b-1d3b506119bf name=/runtime.v1.RuntimeService/Version
	Nov 01 09:17:04 functional-854568 crio[5564]: time="2025-11-01 09:17:04.401210189Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ef38d0d4-0571-427a-b11b-1d3b506119bf name=/runtime.v1.RuntimeService/Version
	Nov 01 09:17:04 functional-854568 crio[5564]: time="2025-11-01 09:17:04.402772114Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1aa0b8a2-eebd-47a2-b19d-a8dc35e99622 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:17:04 functional-854568 crio[5564]: time="2025-11-01 09:17:04.403529858Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761988624403506250,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:203241,},InodesUsed:&UInt64Value{Value:105,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1aa0b8a2-eebd-47a2-b19d-a8dc35e99622 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:17:04 functional-854568 crio[5564]: time="2025-11-01 09:17:04.404803762Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f257422e-b3e5-48e6-8266-720f680875db name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:17:04 functional-854568 crio[5564]: time="2025-11-01 09:17:04.405144243Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f257422e-b3e5-48e6-8266-720f680875db name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:17:04 functional-854568 crio[5564]: time="2025-11-01 09:17:04.407732663Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e1db797037a3e231a8ffd1c56a3e45cc9827cda7e2a2a278c8d970fdbd3df2b1,PodSandboxId:d67ad6ff7673b08a9cc8c42942ae42dc1c4dc95cb75904a0d73bdefacfe9321e,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761988291559841371,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 249b33c1-c442-4698-8c37-9d6af53ed2fc,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6d9bcb479648406e7787a6a7f84f9254b8acb19b54aee4ce9e4edd9ab40c17,PodSandboxId:0e5dbb626ffafe655eb136e4e598093f4f7349f42c16b9697b40ea2f7815d2cc,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1761988259082898736,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-8fqgj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 645dc979-5e33-4017-b9c6-399736482d7d,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27e849fb394fced4618d4f167f2d823a9c7ca62600a1d78cf02fea45d44d76df,PodSandboxId:42ddeb7ee9b6605f7143ce6b4a34ae2aedb45066e7a3b4753c7aa32ffab02389,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761988234776455282,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e932432e-8369-4ac7-be62-15697906b114,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount:
4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15fd989610bc82b5ff7d2143c752c984a8ed407cd980a1d913715ac95f1a45d,PodSandboxId:ab5e8ba1a8d18c809b77802574cda9346aeb390ec2de791545670977d988de80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761988234785739417,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8qv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d891ac56-f0c4-46ba-bce1-fb68e7eb54a3,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b2d3d715d65d8daca359ee84aa5bb213762342047206346ec68002680e2c6a6,PodSandboxId:21ec93d6e0dcfc1472ca0a8bd0345c30311f79463dfcf545e3c7c76edb53e5bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761988231321175028,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 567794742ee267e0898306a2bfdc060c,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"contain
erPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e1306ed5ca1da3b4bb7e6a76b365506383370faadb8ae1ef828ed8e2856a116,PodSandboxId:70138226f92eb528456f8b9ea362b6f28c8d944efd0a34c0ba04075dcd37c4ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761988231135134598,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7204fc2807c91c2baeb21d904e5b3e8,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcaae31b320d80c9f04d5efd24184a4beb5ba44a54a55897bc3885db2101c53,PodSandboxId:61712013dba8793e05ff50b6ff4f269eeb142cef8809b28fb70de3fa57998398,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761988231103923096,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-854568,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 6a10c03a29f4d4d9c61649b9a5d64941,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0451d0b1d08ba6977476b1cc2964353404f0b83988abcf00a95a01b3055c6a10,PodSandboxId:1ee40d241e597c98bab9769d8ae0cf1883e1737a1ca60de4ff46c366a9794298,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761988228712767990,Labels:map[string]string{io.kubernetes.container.name: coredns,
io.kubernetes.pod.name: coredns-66bc5c9577-626v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534f1588-2719-4435-9399-fcf4dff390de,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bc1398379b4a0842eca102935669fe8ffb1bfa5acb9325f2477e376a4ca6a00,PodSandboxId:58f8c972b4dbedd2a539c96f4b72b7b8be76d6b72158faab4c02381a8726e773,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,}
,Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761988227786507499,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b858069348de84ce0334761afe76b9b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d109feadf1871d0729895d871197182682bc15a08c9e3b8946bde6b349051334,PodSandboxId:42ddeb7ee9b660
5f7143ce6b4a34ae2aedb45066e7a3b4753c7aa32ffab02389,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761988227575670752,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e932432e-8369-4ac7-be62-15697906b114,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806eea7f9cd39165a634bd0823e0beeaf596c091f2cb1e52c537e2a119cc0493,PodSandboxId:61712013dba8793e05ff50b6ff4
f269eeb142cef8809b28fb70de3fa57998398,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761988227449639556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a10c03a29f4d4d9c61649b9a5d64941,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de92f64e8c564b2e15a82533a321
66c758aeafe35bbc57469519bb24cd65be57,PodSandboxId:70138226f92eb528456f8b9ea362b6f28c8d944efd0a34c0ba04075dcd37c4ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761988227542580924,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7204fc2807c91c2baeb21d904e5b3e8,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e71039fa4c92372a4d04f9348709d0fc7cedeaa9c8d054fbf0d38ab2da2f3b1,PodSandboxId:ab5e8ba1a8d18c809b77802574cda9346aeb390ec2de791545670977d988de80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761988227321643763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8qv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d891ac56-f0c4-46ba-bce1-fb68e7eb54a3,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fbfacd7f2a11f1e822e898ef1c1a0d7d4c85fd05899505e011528adcfbc480c,PodSandboxId:ff3380e3e50ee333855f1e94c42078ac4667a94d5708722ca2db9b78941f9305,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761988186258636450,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b858069348de84ce0334761afe76b9b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cd8344d8c832c19add0478500062dcd8ed023406e149142e78a049f0304e04c,PodSandboxId:952c34f1f33f41404348bdffb010de32512512f46f9a22c5919b2e55aadaad34,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761988172472296819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-626v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534f1588-2719-4435-9399-fcf4dff390de,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f257422e-b3e5-48e6-8266-720f680875db name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:17:04 functional-854568 crio[5564]: time="2025-11-01 09:17:04.443796783Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0c0c47ad-b1b5-44bd-96d7-6746edc20ac4 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:17:04 functional-854568 crio[5564]: time="2025-11-01 09:17:04.443902501Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0c0c47ad-b1b5-44bd-96d7-6746edc20ac4 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:17:04 functional-854568 crio[5564]: time="2025-11-01 09:17:04.445201248Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3416e3bc-362e-4d08-bca8-209df4956da9 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:17:04 functional-854568 crio[5564]: time="2025-11-01 09:17:04.445926031Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761988624445901074,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:203241,},InodesUsed:&UInt64Value{Value:105,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3416e3bc-362e-4d08-bca8-209df4956da9 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:17:04 functional-854568 crio[5564]: time="2025-11-01 09:17:04.446463893Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4135bcfe-b2a9-4ea4-a887-7b93b8756560 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:17:04 functional-854568 crio[5564]: time="2025-11-01 09:17:04.446516062Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4135bcfe-b2a9-4ea4-a887-7b93b8756560 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:17:04 functional-854568 crio[5564]: time="2025-11-01 09:17:04.446795737Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e1db797037a3e231a8ffd1c56a3e45cc9827cda7e2a2a278c8d970fdbd3df2b1,PodSandboxId:d67ad6ff7673b08a9cc8c42942ae42dc1c4dc95cb75904a0d73bdefacfe9321e,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761988291559841371,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 249b33c1-c442-4698-8c37-9d6af53ed2fc,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6d9bcb479648406e7787a6a7f84f9254b8acb19b54aee4ce9e4edd9ab40c17,PodSandboxId:0e5dbb626ffafe655eb136e4e598093f4f7349f42c16b9697b40ea2f7815d2cc,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1761988259082898736,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-8fqgj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 645dc979-5e33-4017-b9c6-399736482d7d,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27e849fb394fced4618d4f167f2d823a9c7ca62600a1d78cf02fea45d44d76df,PodSandboxId:42ddeb7ee9b6605f7143ce6b4a34ae2aedb45066e7a3b4753c7aa32ffab02389,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761988234776455282,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e932432e-8369-4ac7-be62-15697906b114,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount:
4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15fd989610bc82b5ff7d2143c752c984a8ed407cd980a1d913715ac95f1a45d,PodSandboxId:ab5e8ba1a8d18c809b77802574cda9346aeb390ec2de791545670977d988de80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761988234785739417,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8qv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d891ac56-f0c4-46ba-bce1-fb68e7eb54a3,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b2d3d715d65d8daca359ee84aa5bb213762342047206346ec68002680e2c6a6,PodSandboxId:21ec93d6e0dcfc1472ca0a8bd0345c30311f79463dfcf545e3c7c76edb53e5bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761988231321175028,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 567794742ee267e0898306a2bfdc060c,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"contain
erPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e1306ed5ca1da3b4bb7e6a76b365506383370faadb8ae1ef828ed8e2856a116,PodSandboxId:70138226f92eb528456f8b9ea362b6f28c8d944efd0a34c0ba04075dcd37c4ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761988231135134598,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7204fc2807c91c2baeb21d904e5b3e8,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcaae31b320d80c9f04d5efd24184a4beb5ba44a54a55897bc3885db2101c53,PodSandboxId:61712013dba8793e05ff50b6ff4f269eeb142cef8809b28fb70de3fa57998398,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761988231103923096,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-854568,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 6a10c03a29f4d4d9c61649b9a5d64941,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0451d0b1d08ba6977476b1cc2964353404f0b83988abcf00a95a01b3055c6a10,PodSandboxId:1ee40d241e597c98bab9769d8ae0cf1883e1737a1ca60de4ff46c366a9794298,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761988228712767990,Labels:map[string]string{io.kubernetes.container.name: coredns,
io.kubernetes.pod.name: coredns-66bc5c9577-626v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534f1588-2719-4435-9399-fcf4dff390de,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bc1398379b4a0842eca102935669fe8ffb1bfa5acb9325f2477e376a4ca6a00,PodSandboxId:58f8c972b4dbedd2a539c96f4b72b7b8be76d6b72158faab4c02381a8726e773,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,}
,Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761988227786507499,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b858069348de84ce0334761afe76b9b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d109feadf1871d0729895d871197182682bc15a08c9e3b8946bde6b349051334,PodSandboxId:42ddeb7ee9b660
5f7143ce6b4a34ae2aedb45066e7a3b4753c7aa32ffab02389,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761988227575670752,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e932432e-8369-4ac7-be62-15697906b114,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806eea7f9cd39165a634bd0823e0beeaf596c091f2cb1e52c537e2a119cc0493,PodSandboxId:61712013dba8793e05ff50b6ff4
f269eeb142cef8809b28fb70de3fa57998398,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761988227449639556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a10c03a29f4d4d9c61649b9a5d64941,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de92f64e8c564b2e15a82533a321
66c758aeafe35bbc57469519bb24cd65be57,PodSandboxId:70138226f92eb528456f8b9ea362b6f28c8d944efd0a34c0ba04075dcd37c4ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761988227542580924,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7204fc2807c91c2baeb21d904e5b3e8,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e71039fa4c92372a4d04f9348709d0fc7cedeaa9c8d054fbf0d38ab2da2f3b1,PodSandboxId:ab5e8ba1a8d18c809b77802574cda9346aeb390ec2de791545670977d988de80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761988227321643763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8qv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d891ac56-f0c4-46ba-bce1-fb68e7eb54a3,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fbfacd7f2a11f1e822e898ef1c1a0d7d4c85fd05899505e011528adcfbc480c,PodSandboxId:ff3380e3e50ee333855f1e94c42078ac4667a94d5708722ca2db9b78941f9305,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761988186258636450,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b858069348de84ce0334761afe76b9b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cd8344d8c832c19add0478500062dcd8ed023406e149142e78a049f0304e04c,PodSandboxId:952c34f1f33f41404348bdffb010de32512512f46f9a22c5919b2e55aadaad34,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761988172472296819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-626v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534f1588-2719-4435-9399-fcf4dff390de,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4135bcfe-b2a9-4ea4-a887-7b93b8756560 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e1db797037a3e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e     5 minutes ago       Exited              mount-munger              0                   d67ad6ff7673b       busybox-mount
	ad6d9bcb47964       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6   6 minutes ago       Running             echo-server               0                   0e5dbb626ffaf       hello-node-connect-7d85dfc575-8fqgj
	b15fd989610bc       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                        6 minutes ago       Running             kube-proxy                4                   ab5e8ba1a8d18       kube-proxy-p8qv6
	27e849fb394fc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                        6 minutes ago       Running             storage-provisioner       4                   42ddeb7ee9b66       storage-provisioner
	0b2d3d715d65d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                        6 minutes ago       Running             kube-apiserver            0                   21ec93d6e0dcf       kube-apiserver-functional-854568
	7e1306ed5ca1d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                        6 minutes ago       Running             kube-controller-manager   4                   70138226f92eb       kube-controller-manager-functional-854568
	4dcaae31b320d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                        6 minutes ago       Running             etcd                      4                   61712013dba87       etcd-functional-854568
	0451d0b1d08ba       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                        6 minutes ago       Running             coredns                   2                   1ee40d241e597       coredns-66bc5c9577-626v2
	0bc1398379b4a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                        6 minutes ago       Running             kube-scheduler            3                   58f8c972b4dbe       kube-scheduler-functional-854568
	d109feadf1871       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                        6 minutes ago       Exited              storage-provisioner       3                   42ddeb7ee9b66       storage-provisioner
	de92f64e8c564       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                        6 minutes ago       Exited              kube-controller-manager   3                   70138226f92eb       kube-controller-manager-functional-854568
	806eea7f9cd39       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                        6 minutes ago       Exited              etcd                      3                   61712013dba87       etcd-functional-854568
	7e71039fa4c92       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                        6 minutes ago       Exited              kube-proxy                3                   ab5e8ba1a8d18       kube-proxy-p8qv6
	0fbfacd7f2a11       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                        7 minutes ago       Exited              kube-scheduler            2                   ff3380e3e50ee       kube-scheduler-functional-854568
	5cd8344d8c832       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                        7 minutes ago       Exited              coredns                   1                   952c34f1f33f4       coredns-66bc5c9577-626v2
	
	
	==> coredns [0451d0b1d08ba6977476b1cc2964353404f0b83988abcf00a95a01b3055c6a10] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40232 - 22482 "HINFO IN 5854806722054425578.3190548008883538820. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.030681733s
	
	
	==> coredns [5cd8344d8c832c19add0478500062dcd8ed023406e149142e78a049f0304e04c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50651 - 59818 "HINFO IN 8748826513468128324.7719950190033398852. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018360541s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-854568
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-854568
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=functional-854568
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_08_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:08:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-854568
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:17:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:17:02 +0000   Sat, 01 Nov 2025 09:08:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:17:02 +0000   Sat, 01 Nov 2025 09:08:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:17:02 +0000   Sat, 01 Nov 2025 09:08:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:17:02 +0000   Sat, 01 Nov 2025 09:08:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.129
	  Hostname:    functional-854568
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 cdac547e78d548549cd4406c550707a8
	  System UUID:                cdac547e-78d5-4854-9cd4-406c550707a8
	  Boot ID:                    4fee0e31-2a9b-4ffb-9a8e-d63cba9bf994
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-pvt5m                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  default                     hello-node-connect-7d85dfc575-8fqgj           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	  default                     mysql-5bb876957f-dqd4j                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    5m55s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 coredns-66bc5c9577-626v2                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m21s
	  kube-system                 etcd-functional-854568                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m26s
	  kube-system                 kube-apiserver-functional-854568              250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-controller-manager-functional-854568     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 kube-proxy-p8qv6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 kube-scheduler-functional-854568              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m19s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-m4r9g    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-mk8vc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m18s                  kube-proxy       
	  Normal  Starting                 6m29s                  kube-proxy       
	  Normal  Starting                 7m13s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m26s                  kubelet          Node functional-854568 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  8m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m26s                  kubelet          Node functional-854568 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m26s                  kubelet          Node functional-854568 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m26s                  kubelet          Starting kubelet.
	  Normal  NodeReady                8m25s                  kubelet          Node functional-854568 status is now: NodeReady
	  Normal  RegisteredNode           8m22s                  node-controller  Node functional-854568 event: Registered Node functional-854568 in Controller
	  Normal  NodeHasNoDiskPressure    7m19s (x8 over 7m19s)  kubelet          Node functional-854568 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  7m19s (x8 over 7m19s)  kubelet          Node functional-854568 status is now: NodeHasSufficientMemory
	  Normal  Starting                 7m19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     7m19s (x7 over 7m19s)  kubelet          Node functional-854568 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m12s                  node-controller  Node functional-854568 event: Registered Node functional-854568 in Controller
	  Normal  Starting                 6m34s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m34s (x8 over 6m34s)  kubelet          Node functional-854568 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m34s (x8 over 6m34s)  kubelet          Node functional-854568 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m34s (x7 over 6m34s)  kubelet          Node functional-854568 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m27s                  node-controller  Node functional-854568 event: Registered Node functional-854568 in Controller
	
	
	==> dmesg <==
	[  +0.001865] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.187392] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000020] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.090698] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.096999] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.135600] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.000071] kauditd_printk_skb: 18 callbacks suppressed
	[  +9.667190] kauditd_printk_skb: 237 callbacks suppressed
	[Nov 1 09:09] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.107780] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.934436] kauditd_printk_skb: 338 callbacks suppressed
	[  +5.546896] kauditd_printk_skb: 75 callbacks suppressed
	[Nov 1 09:10] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.111141] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.580326] kauditd_printk_skb: 56 callbacks suppressed
	[  +0.631439] kauditd_printk_skb: 314 callbacks suppressed
	[  +1.514979] kauditd_printk_skb: 98 callbacks suppressed
	[  +0.072142] kauditd_printk_skb: 109 callbacks suppressed
	[Nov 1 09:11] kauditd_printk_skb: 107 callbacks suppressed
	[  +5.404869] kauditd_printk_skb: 26 callbacks suppressed
	[ +20.565921] kauditd_printk_skb: 38 callbacks suppressed
	[ +12.688476] kauditd_printk_skb: 31 callbacks suppressed
	[Nov 1 09:12] kauditd_printk_skb: 74 callbacks suppressed
	[Nov 1 09:16] crun[9745]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	
	
	==> etcd [4dcaae31b320d80c9f04d5efd24184a4beb5ba44a54a55897bc3885db2101c53] <==
	{"level":"warn","ts":"2025-11-01T09:10:32.904368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:32.917807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:32.926159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:32.934181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:32.943838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:32.957578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:32.967277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:32.972052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:32.981207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:32.987358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:32.996312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.003094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.014585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.018701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.027009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.039030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.052484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.056306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.069834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.086362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.098344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.101563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.110131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.119029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.163128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58564","server-name":"","error":"EOF"}
	
	
	==> etcd [806eea7f9cd39165a634bd0823e0beeaf596c091f2cb1e52c537e2a119cc0493] <==
	{"level":"info","ts":"2025-11-01T09:10:28.261021Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"a2af9788ad7a361f","local-member-id":"245a8df1c58de0e1","recovered-remote-peer-id":"245a8df1c58de0e1","recovered-remote-peer-urls":["https://192.168.39.129:2380"],"recovered-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-11-01T09:10:28.261035Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	{"level":"info","ts":"2025-11-01T09:10:28.261047Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	{"level":"info","ts":"2025-11-01T09:10:28.261128Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	{"level":"info","ts":"2025-11-01T09:10:28.261265Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"245a8df1c58de0e1 switched to configuration voters=()"}
	{"level":"info","ts":"2025-11-01T09:10:28.261308Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"245a8df1c58de0e1 became follower at term 3"}
	{"level":"info","ts":"2025-11-01T09:10:28.261320Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft 245a8df1c58de0e1 [peers: [], term: 3, commit: 566, applied: 0, lastindex: 566, lastterm: 3]"}
	{"level":"warn","ts":"2025-11-01T09:10:28.268634Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2025-11-01T09:10:28.299822Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":520}
	{"level":"info","ts":"2025-11-01T09:10:28.319741Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2025-11-01T09:10:28.320231Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"245a8df1c58de0e1","timeout":"7s"}
	{"level":"info","ts":"2025-11-01T09:10:28.320514Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"245a8df1c58de0e1"}
	{"level":"info","ts":"2025-11-01T09:10:28.320587Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"245a8df1c58de0e1","local-server-version":"3.6.4","cluster-id":"a2af9788ad7a361f","cluster-version":"3.6"}
	{"level":"info","ts":"2025-11-01T09:10:28.320895Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"245a8df1c58de0e1","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-11-01T09:10:28.321037Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T09:10:28.321065Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T09:10:28.321072Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T09:10:28.322905Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"245a8df1c58de0e1 switched to configuration voters=(2619562202810409185)"}
	{"level":"info","ts":"2025-11-01T09:10:28.324060Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"a2af9788ad7a361f","local-member-id":"245a8df1c58de0e1","added-peer-id":"245a8df1c58de0e1","added-peer-peer-urls":["https://192.168.39.129:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-11-01T09:10:28.324182Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"a2af9788ad7a361f","local-member-id":"245a8df1c58de0e1","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-11-01T09:10:28.327793Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-01T09:10:28.333565Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"245a8df1c58de0e1","initial-advertise-peer-urls":["https://192.168.39.129:2380"],"listen-peer-urls":["https://192.168.39.129:2380"],"advertise-client-urls":["https://192.168.39.129:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.129:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-01T09:10:28.333610Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-01T09:10:28.334221Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.129:2380"}
	{"level":"info","ts":"2025-11-01T09:10:28.334264Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.129:2380"}
	
	
	==> kernel <==
	 09:17:04 up 9 min,  0 users,  load average: 0.51, 0.43, 0.30
	Linux functional-854568 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [0b2d3d715d65d8daca359ee84aa5bb213762342047206346ec68002680e2c6a6] <==
	I1101 09:10:34.016511       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 09:10:34.016606       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 09:10:34.016636       1 policy_source.go:240] refreshing policies
	I1101 09:10:34.017478       1 aggregator.go:171] initial CRD sync complete...
	I1101 09:10:34.017511       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 09:10:34.017517       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:10:34.017522       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:10:34.017636       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 09:10:34.032707       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:10:34.521559       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:10:34.719912       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:10:35.759836       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:10:35.810339       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 09:10:35.835705       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:10:35.847097       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:10:37.471420       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:10:37.521759       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:10:53.026913       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.16.89"}
	I1101 09:10:57.543323       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:10:57.668451       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.98.190.164"}
	I1101 09:10:58.399714       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.3.18"}
	I1101 09:11:09.020220       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.99.182.209"}
	I1101 09:11:45.352100       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:11:45.701480       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.47.106"}
	I1101 09:11:45.721847       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.239.202"}
	
	
	==> kube-controller-manager [7e1306ed5ca1da3b4bb7e6a76b365506383370faadb8ae1ef828ed8e2856a116] <==
	I1101 09:10:37.273009       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 09:10:37.274250       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 09:10:37.272871       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 09:10:37.277632       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:10:37.280560       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 09:10:37.280639       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:10:37.282005       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:10:37.282030       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:10:37.282037       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:10:37.282491       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:10:37.286271       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 09:10:37.288626       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:10:37.291296       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:10:37.292027       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 09:10:37.294812       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 09:10:37.301810       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 09:10:37.301873       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 09:10:37.308193       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:10:37.314539       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 09:10:37.319476       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	E1101 09:11:45.470473       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:11:45.490084       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:11:45.497121       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:11:45.519845       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:11:45.526166       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [de92f64e8c564b2e15a82533a32166c758aeafe35bbc57469519bb24cd65be57] <==
	
	
	==> kube-proxy [7e71039fa4c92372a4d04f9348709d0fc7cedeaa9c8d054fbf0d38ab2da2f3b1] <==
	I1101 09:10:27.847135       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:10:27.940781       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1101 09:10:27.943378       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-854568&limit=500&resourceVersion=0\": dial tcp 192.168.39.129:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-proxy [b15fd989610bc82b5ff7d2143c752c984a8ed407cd980a1d913715ac95f1a45d] <==
	I1101 09:10:35.196540       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:10:35.297438       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:10:35.297465       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.129"]
	E1101 09:10:35.297669       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:10:35.337819       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 09:10:35.337886       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 09:10:35.337907       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:10:35.348531       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:10:35.348822       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:10:35.348835       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:10:35.351142       1 config.go:309] "Starting node config controller"
	I1101 09:10:35.351171       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:10:35.351178       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:10:35.362216       1 config.go:200] "Starting service config controller"
	I1101 09:10:35.362396       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:10:35.362429       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:10:35.363077       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:10:35.362692       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:10:35.363316       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:10:35.363374       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:10:35.463352       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:10:35.463497       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [0bc1398379b4a0842eca102935669fe8ffb1bfa5acb9325f2477e376a4ca6a00] <==
	E1101 09:10:30.940568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.39.129:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.129:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:10:31.038150       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.129:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.129:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:10:31.048810       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.129:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.129:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:10:31.122051       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.39.129:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.129:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1101 09:10:31.130031       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.39.129:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.129:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:10:31.179610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.39.129:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.129:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:10:31.201604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.39.129:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.129:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:10:33.833731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:10:33.833803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:10:33.833850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:10:33.833894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:10:33.834511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:10:33.834805       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:10:33.835005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:10:33.835221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:10:33.835472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:10:33.835690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:10:33.835916       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:10:33.836419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:10:33.836534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:10:33.836754       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:10:33.838399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:10:33.838448       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:10:33.868441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1101 09:10:37.999449       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [0fbfacd7f2a11f1e822e898ef1c1a0d7d4c85fd05899505e011528adcfbc480c] <==
	I1101 09:09:47.905692       1 serving.go:386] Generated self-signed cert in-memory
	W1101 09:09:49.393112       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 09:09:49.393155       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 09:09:49.393166       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 09:09:49.393171       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 09:09:49.503208       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:09:49.503248       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:09:49.507313       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:09:49.507383       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:09:49.507850       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:09:49.507909       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:09:49.608223       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:10:11.028026       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1101 09:10:11.028140       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1101 09:10:11.028163       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1101 09:10:11.028183       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1101 09:10:11.028202       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 01 09:16:08 functional-854568 kubelet[6640]: E1101 09:16:08.424197    6640 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Nov 01 09:16:08 functional-854568 kubelet[6640]: E1101 09:16:08.424456    6640 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-pvt5m_default(dc5ce2a1-fb71-4117-9dec-aa7f6043b738): ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 01 09:16:08 functional-854568 kubelet[6640]: E1101 09:16:08.424492    6640 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-pvt5m" podUID="dc5ce2a1-fb71-4117-9dec-aa7f6043b738"
	Nov 01 09:16:10 functional-854568 kubelet[6640]: E1101 09:16:10.730000    6640 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761988570729653589  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177580}  inodes_used:{value:89}}"
	Nov 01 09:16:10 functional-854568 kubelet[6640]: E1101 09:16:10.730025    6640 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761988570729653589  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177580}  inodes_used:{value:89}}"
	Nov 01 09:16:20 functional-854568 kubelet[6640]: E1101 09:16:20.732191    6640 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761988580731773575  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177580}  inodes_used:{value:89}}"
	Nov 01 09:16:20 functional-854568 kubelet[6640]: E1101 09:16:20.732239    6640 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761988580731773575  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177580}  inodes_used:{value:89}}"
	Nov 01 09:16:23 functional-854568 kubelet[6640]: E1101 09:16:23.479278    6640 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-pvt5m" podUID="dc5ce2a1-fb71-4117-9dec-aa7f6043b738"
	Nov 01 09:16:30 functional-854568 kubelet[6640]: E1101 09:16:30.616364    6640 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod3b858069348de84ce0334761afe76b9b/crio-ff3380e3e50ee333855f1e94c42078ac4667a94d5708722ca2db9b78941f9305: Error finding container ff3380e3e50ee333855f1e94c42078ac4667a94d5708722ca2db9b78941f9305: Status 404 returned error can't find the container with id ff3380e3e50ee333855f1e94c42078ac4667a94d5708722ca2db9b78941f9305
	Nov 01 09:16:30 functional-854568 kubelet[6640]: E1101 09:16:30.616677    6640 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod534f1588-2719-4435-9399-fcf4dff390de/crio-952c34f1f33f41404348bdffb010de32512512f46f9a22c5919b2e55aadaad34: Error finding container 952c34f1f33f41404348bdffb010de32512512f46f9a22c5919b2e55aadaad34: Status 404 returned error can't find the container with id 952c34f1f33f41404348bdffb010de32512512f46f9a22c5919b2e55aadaad34
	Nov 01 09:16:30 functional-854568 kubelet[6640]: E1101 09:16:30.734378    6640 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761988590733897528  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177580}  inodes_used:{value:89}}"
	Nov 01 09:16:30 functional-854568 kubelet[6640]: E1101 09:16:30.734402    6640 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761988590733897528  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177580}  inodes_used:{value:89}}"
	Nov 01 09:16:35 functional-854568 kubelet[6640]: E1101 09:16:35.478577    6640 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-pvt5m" podUID="dc5ce2a1-fb71-4117-9dec-aa7f6043b738"
	Nov 01 09:16:40 functional-854568 kubelet[6640]: E1101 09:16:40.737529    6640 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761988600737124638  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177580}  inodes_used:{value:89}}"
	Nov 01 09:16:40 functional-854568 kubelet[6640]: E1101 09:16:40.737556    6640 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761988600737124638  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177580}  inodes_used:{value:89}}"
	Nov 01 09:16:47 functional-854568 kubelet[6640]: E1101 09:16:47.479465    6640 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-pvt5m" podUID="dc5ce2a1-fb71-4117-9dec-aa7f6043b738"
	Nov 01 09:16:50 functional-854568 kubelet[6640]: E1101 09:16:50.740268    6640 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761988610739466143  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203241}  inodes_used:{value:105}}"
	Nov 01 09:16:50 functional-854568 kubelet[6640]: E1101 09:16:50.740313    6640 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761988610739466143  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203241}  inodes_used:{value:105}}"
	Nov 01 09:16:52 functional-854568 kubelet[6640]: E1101 09:16:52.597370    6640 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Nov 01 09:16:52 functional-854568 kubelet[6640]: E1101 09:16:52.597442    6640 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Nov 01 09:16:52 functional-854568 kubelet[6640]: E1101 09:16:52.597705    6640 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-m4r9g_kubernetes-dashboard(b35ccd8f-dbbd-4df5-a652-9d21e07e5964): ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 01 09:16:52 functional-854568 kubelet[6640]: E1101 09:16:52.597748    6640 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-m4r9g" podUID="b35ccd8f-dbbd-4df5-a652-9d21e07e5964"
	Nov 01 09:17:00 functional-854568 kubelet[6640]: E1101 09:17:00.742668    6640 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761988620742151271  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203241}  inodes_used:{value:105}}"
	Nov 01 09:17:00 functional-854568 kubelet[6640]: E1101 09:17:00.742688    6640 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761988620742151271  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203241}  inodes_used:{value:105}}"
	Nov 01 09:17:04 functional-854568 kubelet[6640]: E1101 09:17:04.487219    6640 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-m4r9g" podUID="b35ccd8f-dbbd-4df5-a652-9d21e07e5964"
	
	
	==> storage-provisioner [27e849fb394fced4618d4f167f2d823a9c7ca62600a1d78cf02fea45d44d76df] <==
	W1101 09:16:40.504258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:42.509125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:42.518007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:44.521690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:44.527845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:46.534468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:46.548165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:48.553304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:48.560115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:50.566696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:50.577243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:52.580917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:52.589691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:54.592739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:54.597325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:56.601360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:56.607105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:58.610866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:16:58.615774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:17:00.618549       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:17:00.626492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:17:02.629728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:17:02.635680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:17:04.641658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:17:04.654451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d109feadf1871d0729895d871197182682bc15a08c9e3b8946bde6b349051334] <==
	I1101 09:10:28.204289       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 09:10:28.209290       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-854568 -n functional-854568
helpers_test.go:269: (dbg) Run:  kubectl --context functional-854568 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-pvt5m mysql-5bb876957f-dqd4j sp-pod dashboard-metrics-scraper-77bf4d6c4c-m4r9g kubernetes-dashboard-855c9754f9-mk8vc
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-854568 describe pod busybox-mount hello-node-75c85bcc94-pvt5m mysql-5bb876957f-dqd4j sp-pod dashboard-metrics-scraper-77bf4d6c4c-m4r9g kubernetes-dashboard-855c9754f9-mk8vc
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-854568 describe pod busybox-mount hello-node-75c85bcc94-pvt5m mysql-5bb876957f-dqd4j sp-pod dashboard-metrics-scraper-77bf4d6c4c-m4r9g kubernetes-dashboard-855c9754f9-mk8vc: exit status 1 (99.176073ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-854568/192.168.39.129
	Start Time:       Sat, 01 Nov 2025 09:10:58 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://e1db797037a3e231a8ffd1c56a3e45cc9827cda7e2a2a278c8d970fdbd3df2b1
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 01 Nov 2025 09:11:31 +0000
	      Finished:     Sat, 01 Nov 2025 09:11:31 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fvp2s (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-fvp2s:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  6m7s   default-scheduler  Successfully assigned default/busybox-mount to functional-854568
	  Normal  Pulling    6m6s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m34s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.358s (32.29s including waiting). Image size: 4631262 bytes.
	  Normal  Created    5m34s  kubelet            Created container: mount-munger
	  Normal  Started    5m34s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-pvt5m
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-854568/192.168.39.129
	Start Time:       Sat, 01 Nov 2025 09:10:58 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-djsds (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-djsds:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m7s                 default-scheduler  Successfully assigned default/hello-node-75c85bcc94-pvt5m to functional-854568
	  Warning  Failed     3m58s                kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     57s (x2 over 5m36s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     57s (x3 over 5m36s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    18s (x5 over 5m35s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     18s (x5 over 5m35s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    6s (x4 over 6m6s)    kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-dqd4j
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-854568/192.168.39.129
	Start Time:       Sat, 01 Nov 2025 09:11:09 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c7rfc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-c7rfc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  5m56s                default-scheduler  Successfully assigned default/mysql-5bb876957f-dqd4j to functional-854568
	  Warning  Failed     87s (x2 over 4m34s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     87s (x2 over 4m34s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    73s (x2 over 4m33s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     73s (x2 over 4m33s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    60s (x3 over 5m56s)  kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-854568/192.168.39.129
	Start Time:       Sat, 01 Nov 2025 09:11:03 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bblfx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-bblfx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m2s                 default-scheduler  Successfully assigned default/sp-pod to functional-854568
	  Warning  Failed     5m4s                 kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     117s (x2 over 5m4s)  kubelet            Error: ErrImagePull
	  Warning  Failed     117s                 kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    106s (x2 over 5m3s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     106s (x2 over 5m3s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    91s (x3 over 6m2s)   kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-m4r9g" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-mk8vc" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-854568 describe pod busybox-mount hello-node-75c85bcc94-pvt5m mysql-5bb876957f-dqd4j sp-pod dashboard-metrics-scraper-77bf4d6c4c-m4r9g kubernetes-dashboard-855c9754f9-mk8vc: exit status 1
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (368.93s)

                                                
                                    
TestFunctional/parallel/MySQL (602.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-854568 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-dqd4j" [dfb32fdc-7568-4c82-ba99-a7def15513c9] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:337: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1804: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-854568 -n functional-854568
functional_test.go:1804: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-11-01 09:21:09.300055584 +0000 UTC m=+2204.326732820
functional_test.go:1804: (dbg) Run:  kubectl --context functional-854568 describe po mysql-5bb876957f-dqd4j -n default
functional_test.go:1804: (dbg) kubectl --context functional-854568 describe po mysql-5bb876957f-dqd4j -n default:
Name:             mysql-5bb876957f-dqd4j
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-854568/192.168.39.129
Start Time:       Sat, 01 Nov 2025 09:11:09 +0000
Labels:           app=mysql
                  pod-template-hash=5bb876957f
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
  IP:           10.244.0.11
Controlled By:  ReplicaSet/mysql-5bb876957f
Containers:
  mysql:
    Container ID:   
    Image:          docker.io/mysql:5.7
    Image ID:       
    Port:           3306/TCP (mysql)
    Host Port:      0/TCP (mysql)
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     700m
      memory:  700Mi
    Requests:
      cpu:     600m
      memory:  512Mi
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c7rfc (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-c7rfc:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-5bb876957f-dqd4j to functional-854568
  Warning  Failed     5m31s (x2 over 8m38s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     2m17s (x3 over 8m38s)  kubelet            Error: ErrImagePull
  Warning  Failed     2m17s                  kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    97s (x5 over 8m37s)    kubelet            Back-off pulling image "docker.io/mysql:5.7"
  Warning  Failed     97s (x5 over 8m37s)    kubelet            Error: ImagePullBackOff
  Normal   Pulling    86s (x4 over 10m)      kubelet            Pulling image "docker.io/mysql:5.7"
functional_test.go:1804: (dbg) Run:  kubectl --context functional-854568 logs mysql-5bb876957f-dqd4j -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-854568 logs mysql-5bb876957f-dqd4j -n default: exit status 1 (63.951323ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-5bb876957f-dqd4j" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1804: kubectl --context functional-854568 logs mysql-5bb876957f-dqd4j -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
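The MySQL pod fails for the same docker.io rate-limit reason recorded in its events. An alternative, equally hypothetical sketch (none of this is executed by the test, and the secret name and credentials are placeholders) is to make the pulls authenticated so the anonymous limit no longer applies:

	# hypothetical: attach Docker Hub credentials to the default service account
	kubectl --context functional-854568 create secret docker-registry dockerhub-creds \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<access-token>
	kubectl --context functional-854568 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'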
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-854568 -n functional-854568
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-854568 logs -n 25: (1.511875683s)
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-854568 image ls                                                                                                                                   │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ image          │ functional-854568 image load --daemon kicbase/echo-server:functional-854568 --alsologtostderr                                                                │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ image          │ functional-854568 image ls                                                                                                                                   │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ image          │ functional-854568 image save kicbase/echo-server:functional-854568 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ image          │ functional-854568 image rm kicbase/echo-server:functional-854568 --alsologtostderr                                                                           │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ image          │ functional-854568 image ls                                                                                                                                   │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ image          │ functional-854568 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ image          │ functional-854568 image ls                                                                                                                                   │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ image          │ functional-854568 image save --daemon kicbase/echo-server:functional-854568 --alsologtostderr                                                                │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │ 01 Nov 25 09:11 UTC │
	│ dashboard      │ --url --port 36195 -p functional-854568 --alsologtostderr -v=1                                                                                               │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:11 UTC │                     │
	│ update-context │ functional-854568 update-context --alsologtostderr -v=2                                                                                                      │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ update-context │ functional-854568 update-context --alsologtostderr -v=2                                                                                                      │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ update-context │ functional-854568 update-context --alsologtostderr -v=2                                                                                                      │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ image          │ functional-854568 image ls --format short --alsologtostderr                                                                                                  │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ image          │ functional-854568 image ls --format yaml --alsologtostderr                                                                                                   │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ ssh            │ functional-854568 ssh pgrep buildkitd                                                                                                                        │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │                     │
	│ image          │ functional-854568 image build -t localhost/my-image:functional-854568 testdata/build --alsologtostderr                                                       │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ image          │ functional-854568 image ls                                                                                                                                   │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ image          │ functional-854568 image ls --format json --alsologtostderr                                                                                                   │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ image          │ functional-854568 image ls --format table --alsologtostderr                                                                                                  │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:16 UTC │ 01 Nov 25 09:16 UTC │
	│ service        │ functional-854568 service list                                                                                                                               │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:21 UTC │
	│ service        │ functional-854568 service list -o json                                                                                                                       │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ service        │ functional-854568 service --namespace=default --https --url hello-node                                                                                       │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	│ service        │ functional-854568 service hello-node --url --format={{.IP}}                                                                                                  │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	│ service        │ functional-854568 service hello-node --url                                                                                                                   │ functional-854568 │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	└────────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:11:38
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:11:38.234335  546374 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:11:38.234641  546374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:11:38.234652  546374 out.go:374] Setting ErrFile to fd 2...
	I1101 09:11:38.234660  546374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:11:38.234890  546374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-530629/.minikube/bin
	I1101 09:11:38.235395  546374 out.go:368] Setting JSON to false
	I1101 09:11:38.236295  546374 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":64420,"bootTime":1761923878,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:11:38.236391  546374 start.go:143] virtualization: kvm guest
	I1101 09:11:38.238286  546374 out.go:179] * [functional-854568] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:11:38.239579  546374 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 09:11:38.239605  546374 notify.go:221] Checking for updates...
	I1101 09:11:38.241694  546374 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:11:38.243040  546374 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-530629/kubeconfig
	I1101 09:11:38.244334  546374 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-530629/.minikube
	I1101 09:11:38.245586  546374 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:11:38.246693  546374 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:11:38.248214  546374 config.go:182] Loaded profile config "functional-854568": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:11:38.248667  546374 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:11:38.278963  546374 out.go:179] * Using the kvm2 driver based on existing profile
	I1101 09:11:38.280328  546374 start.go:309] selected driver: kvm2
	I1101 09:11:38.280347  546374 start.go:930] validating driver "kvm2" against &{Name:functional-854568 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-854568 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:11:38.280465  546374 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:11:38.281519  546374 cni.go:84] Creating CNI manager for ""
	I1101 09:11:38.281589  546374 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 09:11:38.281653  546374 start.go:353] cluster config:
	{Name:functional-854568 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-854568 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:11:38.283164  546374 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Nov 01 09:21:10 functional-854568 crio[5564]: time="2025-11-01 09:21:10.109568036Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761988870109543647,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:203241,},InodesUsed:&UInt64Value{Value:105,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=81804c6c-8dd7-4855-8f57-863edd463044 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:21:10 functional-854568 crio[5564]: time="2025-11-01 09:21:10.110068247Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7a463dc4-e4cc-4953-8629-bdd016a7d72a name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:21:10 functional-854568 crio[5564]: time="2025-11-01 09:21:10.110125316Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7a463dc4-e4cc-4953-8629-bdd016a7d72a name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:21:10 functional-854568 crio[5564]: time="2025-11-01 09:21:10.110443341Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e1db797037a3e231a8ffd1c56a3e45cc9827cda7e2a2a278c8d970fdbd3df2b1,PodSandboxId:d67ad6ff7673b08a9cc8c42942ae42dc1c4dc95cb75904a0d73bdefacfe9321e,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761988291559841371,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 249b33c1-c442-4698-8c37-9d6af53ed2fc,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6d9bcb479648406e7787a6a7f84f9254b8acb19b54aee4ce9e4edd9ab40c17,PodSandboxId:0e5dbb626ffafe655eb136e4e598093f4f7349f42c16b9697b40ea2f7815d2cc,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1761988259082898736,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-8fqgj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 645dc979-5e33-4017-b9c6-399736482d7d,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27e849fb394fced4618d4f167f2d823a9c7ca62600a1d78cf02fea45d44d76df,PodSandboxId:42ddeb7ee9b6605f7143ce6b4a34ae2aedb45066e7a3b4753c7aa32ffab02389,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761988234776455282,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e932432e-8369-4ac7-be62-15697906b114,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount:
4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15fd989610bc82b5ff7d2143c752c984a8ed407cd980a1d913715ac95f1a45d,PodSandboxId:ab5e8ba1a8d18c809b77802574cda9346aeb390ec2de791545670977d988de80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761988234785739417,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8qv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d891ac56-f0c4-46ba-bce1-fb68e7eb54a3,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b2d3d715d65d8daca359ee84aa5bb213762342047206346ec68002680e2c6a6,PodSandboxId:21ec93d6e0dcfc1472ca0a8bd0345c30311f79463dfcf545e3c7c76edb53e5bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761988231321175028,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 567794742ee267e0898306a2bfdc060c,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"contain
erPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e1306ed5ca1da3b4bb7e6a76b365506383370faadb8ae1ef828ed8e2856a116,PodSandboxId:70138226f92eb528456f8b9ea362b6f28c8d944efd0a34c0ba04075dcd37c4ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761988231135134598,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7204fc2807c91c2baeb21d904e5b3e8,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcaae31b320d80c9f04d5efd24184a4beb5ba44a54a55897bc3885db2101c53,PodSandboxId:61712013dba8793e05ff50b6ff4f269eeb142cef8809b28fb70de3fa57998398,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761988231103923096,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-854568,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 6a10c03a29f4d4d9c61649b9a5d64941,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0451d0b1d08ba6977476b1cc2964353404f0b83988abcf00a95a01b3055c6a10,PodSandboxId:1ee40d241e597c98bab9769d8ae0cf1883e1737a1ca60de4ff46c366a9794298,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761988228712767990,Labels:map[string]string{io.kubernetes.container.name: coredns,
io.kubernetes.pod.name: coredns-66bc5c9577-626v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534f1588-2719-4435-9399-fcf4dff390de,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bc1398379b4a0842eca102935669fe8ffb1bfa5acb9325f2477e376a4ca6a00,PodSandboxId:58f8c972b4dbedd2a539c96f4b72b7b8be76d6b72158faab4c02381a8726e773,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,}
,Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761988227786507499,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b858069348de84ce0334761afe76b9b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d109feadf1871d0729895d871197182682bc15a08c9e3b8946bde6b349051334,PodSandboxId:42ddeb7ee9b660
5f7143ce6b4a34ae2aedb45066e7a3b4753c7aa32ffab02389,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761988227575670752,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e932432e-8369-4ac7-be62-15697906b114,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806eea7f9cd39165a634bd0823e0beeaf596c091f2cb1e52c537e2a119cc0493,PodSandboxId:61712013dba8793e05ff50b6ff4
f269eeb142cef8809b28fb70de3fa57998398,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761988227449639556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a10c03a29f4d4d9c61649b9a5d64941,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de92f64e8c564b2e15a82533a321
66c758aeafe35bbc57469519bb24cd65be57,PodSandboxId:70138226f92eb528456f8b9ea362b6f28c8d944efd0a34c0ba04075dcd37c4ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761988227542580924,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7204fc2807c91c2baeb21d904e5b3e8,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e71039fa4c92372a4d04f9348709d0fc7cedeaa9c8d054fbf0d38ab2da2f3b1,PodSandboxId:ab5e8ba1a8d18c809b77802574cda9346aeb390ec2de791545670977d988de80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761988227321643763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8qv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d891ac56-f0c4-46ba-bce1-fb68e7eb54a3,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fbfacd7f2a11f1e822e898ef1c1a0d7d4c85fd05899505e011528adcfbc480c,PodSandboxId:ff3380e3e50ee333855f1e94c42078ac4667a94d5708722ca2db9b78941f9305,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761988186258636450,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b858069348de84ce0334761afe76b9b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cd8344d8c832c19add0478500062dcd8ed023406e149142e78a049f0304e04c,PodSandboxId:952c34f1f33f41404348bdffb010de32512512f46f9a22c5919b2e55aadaad34,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761988172472296819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-626v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534f1588-2719-4435-9399-fcf4dff390de,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7a463dc4-e4cc-4953-8629-bdd016a7d72a name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:21:10 functional-854568 crio[5564]: time="2025-11-01 09:21:10.158563972Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c7c7ca6c-4388-406b-8cca-526ac2927086 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:21:10 functional-854568 crio[5564]: time="2025-11-01 09:21:10.158709352Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c7c7ca6c-4388-406b-8cca-526ac2927086 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:21:10 functional-854568 crio[5564]: time="2025-11-01 09:21:10.160332883Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=25add9cd-0a4e-4e42-bf73-5526cb05a7b2 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:21:10 functional-854568 crio[5564]: time="2025-11-01 09:21:10.161067060Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761988870161044350,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:203241,},InodesUsed:&UInt64Value{Value:105,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=25add9cd-0a4e-4e42-bf73-5526cb05a7b2 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:21:10 functional-854568 crio[5564]: time="2025-11-01 09:21:10.161627636Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f8b2942e-450d-4937-9f7f-f5bb89540b3b name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:21:10 functional-854568 crio[5564]: time="2025-11-01 09:21:10.161682777Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f8b2942e-450d-4937-9f7f-f5bb89540b3b name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:21:10 functional-854568 crio[5564]: time="2025-11-01 09:21:10.162118894Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e1db797037a3e231a8ffd1c56a3e45cc9827cda7e2a2a278c8d970fdbd3df2b1,PodSandboxId:d67ad6ff7673b08a9cc8c42942ae42dc1c4dc95cb75904a0d73bdefacfe9321e,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761988291559841371,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 249b33c1-c442-4698-8c37-9d6af53ed2fc,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6d9bcb479648406e7787a6a7f84f9254b8acb19b54aee4ce9e4edd9ab40c17,PodSandboxId:0e5dbb626ffafe655eb136e4e598093f4f7349f42c16b9697b40ea2f7815d2cc,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1761988259082898736,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-8fqgj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 645dc979-5e33-4017-b9c6-399736482d7d,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27e849fb394fced4618d4f167f2d823a9c7ca62600a1d78cf02fea45d44d76df,PodSandboxId:42ddeb7ee9b6605f7143ce6b4a34ae2aedb45066e7a3b4753c7aa32ffab02389,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761988234776455282,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e932432e-8369-4ac7-be62-15697906b114,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount:
4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15fd989610bc82b5ff7d2143c752c984a8ed407cd980a1d913715ac95f1a45d,PodSandboxId:ab5e8ba1a8d18c809b77802574cda9346aeb390ec2de791545670977d988de80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761988234785739417,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8qv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d891ac56-f0c4-46ba-bce1-fb68e7eb54a3,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b2d3d715d65d8daca359ee84aa5bb213762342047206346ec68002680e2c6a6,PodSandboxId:21ec93d6e0dcfc1472ca0a8bd0345c30311f79463dfcf545e3c7c76edb53e5bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761988231321175028,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 567794742ee267e0898306a2bfdc060c,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"contain
erPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e1306ed5ca1da3b4bb7e6a76b365506383370faadb8ae1ef828ed8e2856a116,PodSandboxId:70138226f92eb528456f8b9ea362b6f28c8d944efd0a34c0ba04075dcd37c4ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761988231135134598,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7204fc2807c91c2baeb21d904e5b3e8,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcaae31b320d80c9f04d5efd24184a4beb5ba44a54a55897bc3885db2101c53,PodSandboxId:61712013dba8793e05ff50b6ff4f269eeb142cef8809b28fb70de3fa57998398,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761988231103923096,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-854568,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 6a10c03a29f4d4d9c61649b9a5d64941,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0451d0b1d08ba6977476b1cc2964353404f0b83988abcf00a95a01b3055c6a10,PodSandboxId:1ee40d241e597c98bab9769d8ae0cf1883e1737a1ca60de4ff46c366a9794298,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761988228712767990,Labels:map[string]string{io.kubernetes.container.name: coredns,
io.kubernetes.pod.name: coredns-66bc5c9577-626v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534f1588-2719-4435-9399-fcf4dff390de,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bc1398379b4a0842eca102935669fe8ffb1bfa5acb9325f2477e376a4ca6a00,PodSandboxId:58f8c972b4dbedd2a539c96f4b72b7b8be76d6b72158faab4c02381a8726e773,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,}
,Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761988227786507499,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b858069348de84ce0334761afe76b9b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d109feadf1871d0729895d871197182682bc15a08c9e3b8946bde6b349051334,PodSandboxId:42ddeb7ee9b660
5f7143ce6b4a34ae2aedb45066e7a3b4753c7aa32ffab02389,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761988227575670752,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e932432e-8369-4ac7-be62-15697906b114,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806eea7f9cd39165a634bd0823e0beeaf596c091f2cb1e52c537e2a119cc0493,PodSandboxId:61712013dba8793e05ff50b6ff4
f269eeb142cef8809b28fb70de3fa57998398,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761988227449639556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a10c03a29f4d4d9c61649b9a5d64941,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de92f64e8c564b2e15a82533a321
66c758aeafe35bbc57469519bb24cd65be57,PodSandboxId:70138226f92eb528456f8b9ea362b6f28c8d944efd0a34c0ba04075dcd37c4ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761988227542580924,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7204fc2807c91c2baeb21d904e5b3e8,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e71039fa4c92372a4d04f9348709d0fc7cedeaa9c8d054fbf0d38ab2da2f3b1,PodSandboxId:ab5e8ba1a8d18c809b77802574cda9346aeb390ec2de791545670977d988de80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761988227321643763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8qv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d891ac56-f0c4-46ba-bce1-fb68e7eb54a3,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fbfacd7f2a11f1e822e898ef1c1a0d7d4c85fd05899505e011528adcfbc480c,PodSandboxId:ff3380e3e50ee333855f1e94c42078ac4667a94d5708722ca2db9b78941f9305,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761988186258636450,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b858069348de84ce0334761afe76b9b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cd8344d8c832c19add0478500062dcd8ed023406e149142e78a049f0304e04c,PodSandboxId:952c34f1f33f41404348bdffb010de32512512f46f9a22c5919b2e55aadaad34,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761988172472296819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-626v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534f1588-2719-4435-9399-fcf4dff390de,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f8b2942e-450d-4937-9f7f-f5bb89540b3b name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:21:10 functional-854568 crio[5564]: time="2025-11-01 09:21:10.199475197Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d5bd643b-e477-4c48-b3e7-ce672df0f2e2 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:21:10 functional-854568 crio[5564]: time="2025-11-01 09:21:10.199549283Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d5bd643b-e477-4c48-b3e7-ce672df0f2e2 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:21:10 functional-854568 crio[5564]: time="2025-11-01 09:21:10.202279414Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ce9a5b91-c9b2-4d8b-9b7a-3391784b3c03 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:21:10 functional-854568 crio[5564]: time="2025-11-01 09:21:10.203109755Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761988870203083485,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:203241,},InodesUsed:&UInt64Value{Value:105,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ce9a5b91-c9b2-4d8b-9b7a-3391784b3c03 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:21:10 functional-854568 crio[5564]: time="2025-11-01 09:21:10.204223314Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=053756df-5063-42f6-83a2-fb94024ec239 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:21:10 functional-854568 crio[5564]: time="2025-11-01 09:21:10.204475448Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=053756df-5063-42f6-83a2-fb94024ec239 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:21:10 functional-854568 crio[5564]: time="2025-11-01 09:21:10.204821715Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e1db797037a3e231a8ffd1c56a3e45cc9827cda7e2a2a278c8d970fdbd3df2b1,PodSandboxId:d67ad6ff7673b08a9cc8c42942ae42dc1c4dc95cb75904a0d73bdefacfe9321e,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761988291559841371,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 249b33c1-c442-4698-8c37-9d6af53ed2fc,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6d9bcb479648406e7787a6a7f84f9254b8acb19b54aee4ce9e4edd9ab40c17,PodSandboxId:0e5dbb626ffafe655eb136e4e598093f4f7349f42c16b9697b40ea2f7815d2cc,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1761988259082898736,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-8fqgj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 645dc979-5e33-4017-b9c6-399736482d7d,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27e849fb394fced4618d4f167f2d823a9c7ca62600a1d78cf02fea45d44d76df,PodSandboxId:42ddeb7ee9b6605f7143ce6b4a34ae2aedb45066e7a3b4753c7aa32ffab02389,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761988234776455282,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e932432e-8369-4ac7-be62-15697906b114,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount:
4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15fd989610bc82b5ff7d2143c752c984a8ed407cd980a1d913715ac95f1a45d,PodSandboxId:ab5e8ba1a8d18c809b77802574cda9346aeb390ec2de791545670977d988de80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761988234785739417,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8qv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d891ac56-f0c4-46ba-bce1-fb68e7eb54a3,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b2d3d715d65d8daca359ee84aa5bb213762342047206346ec68002680e2c6a6,PodSandboxId:21ec93d6e0dcfc1472ca0a8bd0345c30311f79463dfcf545e3c7c76edb53e5bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761988231321175028,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 567794742ee267e0898306a2bfdc060c,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"contain
erPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e1306ed5ca1da3b4bb7e6a76b365506383370faadb8ae1ef828ed8e2856a116,PodSandboxId:70138226f92eb528456f8b9ea362b6f28c8d944efd0a34c0ba04075dcd37c4ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761988231135134598,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7204fc2807c91c2baeb21d904e5b3e8,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcaae31b320d80c9f04d5efd24184a4beb5ba44a54a55897bc3885db2101c53,PodSandboxId:61712013dba8793e05ff50b6ff4f269eeb142cef8809b28fb70de3fa57998398,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761988231103923096,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-854568,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 6a10c03a29f4d4d9c61649b9a5d64941,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0451d0b1d08ba6977476b1cc2964353404f0b83988abcf00a95a01b3055c6a10,PodSandboxId:1ee40d241e597c98bab9769d8ae0cf1883e1737a1ca60de4ff46c366a9794298,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761988228712767990,Labels:map[string]string{io.kubernetes.container.name: coredns,
io.kubernetes.pod.name: coredns-66bc5c9577-626v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534f1588-2719-4435-9399-fcf4dff390de,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bc1398379b4a0842eca102935669fe8ffb1bfa5acb9325f2477e376a4ca6a00,PodSandboxId:58f8c972b4dbedd2a539c96f4b72b7b8be76d6b72158faab4c02381a8726e773,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,}
,Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761988227786507499,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b858069348de84ce0334761afe76b9b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d109feadf1871d0729895d871197182682bc15a08c9e3b8946bde6b349051334,PodSandboxId:42ddeb7ee9b660
5f7143ce6b4a34ae2aedb45066e7a3b4753c7aa32ffab02389,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761988227575670752,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e932432e-8369-4ac7-be62-15697906b114,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806eea7f9cd39165a634bd0823e0beeaf596c091f2cb1e52c537e2a119cc0493,PodSandboxId:61712013dba8793e05ff50b6ff4
f269eeb142cef8809b28fb70de3fa57998398,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761988227449639556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a10c03a29f4d4d9c61649b9a5d64941,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de92f64e8c564b2e15a82533a321
66c758aeafe35bbc57469519bb24cd65be57,PodSandboxId:70138226f92eb528456f8b9ea362b6f28c8d944efd0a34c0ba04075dcd37c4ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761988227542580924,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7204fc2807c91c2baeb21d904e5b3e8,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e71039fa4c92372a4d04f9348709d0fc7cedeaa9c8d054fbf0d38ab2da2f3b1,PodSandboxId:ab5e8ba1a8d18c809b77802574cda9346aeb390ec2de791545670977d988de80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761988227321643763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8qv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d891ac56-f0c4-46ba-bce1-fb68e7eb54a3,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fbfacd7f2a11f1e822e898ef1c1a0d7d4c85fd05899505e011528adcfbc480c,PodSandboxId:ff3380e3e50ee333855f1e94c42078ac4667a94d5708722ca2db9b78941f9305,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761988186258636450,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b858069348de84ce0334761afe76b9b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cd8344d8c832c19add0478500062dcd8ed023406e149142e78a049f0304e04c,PodSandboxId:952c34f1f33f41404348bdffb010de32512512f46f9a22c5919b2e55aadaad34,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761988172472296819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-626v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534f1588-2719-4435-9399-fcf4dff390de,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=053756df-5063-42f6-83a2-fb94024ec239 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:21:10 functional-854568 crio[5564]: time="2025-11-01 09:21:10.242747396Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d4f08707-9223-4764-ac64-60cc7bd04585 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:21:10 functional-854568 crio[5564]: time="2025-11-01 09:21:10.242838998Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d4f08707-9223-4764-ac64-60cc7bd04585 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:21:10 functional-854568 crio[5564]: time="2025-11-01 09:21:10.243737107Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1fb8802a-1114-43b4-bcaa-9491035727dd name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:21:10 functional-854568 crio[5564]: time="2025-11-01 09:21:10.245582044Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761988870245518382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:203241,},InodesUsed:&UInt64Value{Value:105,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1fb8802a-1114-43b4-bcaa-9491035727dd name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:21:10 functional-854568 crio[5564]: time="2025-11-01 09:21:10.246212551Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0be2025b-7957-4553-9b61-57d1b90a4a51 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:21:10 functional-854568 crio[5564]: time="2025-11-01 09:21:10.246601114Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0be2025b-7957-4553-9b61-57d1b90a4a51 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:21:10 functional-854568 crio[5564]: time="2025-11-01 09:21:10.247636445Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e1db797037a3e231a8ffd1c56a3e45cc9827cda7e2a2a278c8d970fdbd3df2b1,PodSandboxId:d67ad6ff7673b08a9cc8c42942ae42dc1c4dc95cb75904a0d73bdefacfe9321e,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761988291559841371,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 249b33c1-c442-4698-8c37-9d6af53ed2fc,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6d9bcb479648406e7787a6a7f84f9254b8acb19b54aee4ce9e4edd9ab40c17,PodSandboxId:0e5dbb626ffafe655eb136e4e598093f4f7349f42c16b9697b40ea2f7815d2cc,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1761988259082898736,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-8fqgj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 645dc979-5e33-4017-b9c6-399736482d7d,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27e849fb394fced4618d4f167f2d823a9c7ca62600a1d78cf02fea45d44d76df,PodSandboxId:42ddeb7ee9b6605f7143ce6b4a34ae2aedb45066e7a3b4753c7aa32ffab02389,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761988234776455282,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e932432e-8369-4ac7-be62-15697906b114,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount:
4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15fd989610bc82b5ff7d2143c752c984a8ed407cd980a1d913715ac95f1a45d,PodSandboxId:ab5e8ba1a8d18c809b77802574cda9346aeb390ec2de791545670977d988de80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:4,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761988234785739417,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8qv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d891ac56-f0c4-46ba-bce1-fb68e7eb54a3,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b2d3d715d65d8daca359ee84aa5bb213762342047206346ec68002680e2c6a6,PodSandboxId:21ec93d6e0dcfc1472ca0a8bd0345c30311f79463dfcf545e3c7c76edb53e5bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761988231321175028,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 567794742ee267e0898306a2bfdc060c,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"contain
erPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e1306ed5ca1da3b4bb7e6a76b365506383370faadb8ae1ef828ed8e2856a116,PodSandboxId:70138226f92eb528456f8b9ea362b6f28c8d944efd0a34c0ba04075dcd37c4ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761988231135134598,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7204fc2807c91c2baeb21d904e5b3e8,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcaae31b320d80c9f04d5efd24184a4beb5ba44a54a55897bc3885db2101c53,PodSandboxId:61712013dba8793e05ff50b6ff4f269eeb142cef8809b28fb70de3fa57998398,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761988231103923096,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-854568,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: 6a10c03a29f4d4d9c61649b9a5d64941,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0451d0b1d08ba6977476b1cc2964353404f0b83988abcf00a95a01b3055c6a10,PodSandboxId:1ee40d241e597c98bab9769d8ae0cf1883e1737a1ca60de4ff46c366a9794298,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761988228712767990,Labels:map[string]string{io.kubernetes.container.name: coredns,
io.kubernetes.pod.name: coredns-66bc5c9577-626v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534f1588-2719-4435-9399-fcf4dff390de,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bc1398379b4a0842eca102935669fe8ffb1bfa5acb9325f2477e376a4ca6a00,PodSandboxId:58f8c972b4dbedd2a539c96f4b72b7b8be76d6b72158faab4c02381a8726e773,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,}
,Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761988227786507499,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b858069348de84ce0334761afe76b9b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d109feadf1871d0729895d871197182682bc15a08c9e3b8946bde6b349051334,PodSandboxId:42ddeb7ee9b660
5f7143ce6b4a34ae2aedb45066e7a3b4753c7aa32ffab02389,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761988227575670752,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e932432e-8369-4ac7-be62-15697906b114,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806eea7f9cd39165a634bd0823e0beeaf596c091f2cb1e52c537e2a119cc0493,PodSandboxId:61712013dba8793e05ff50b6ff4
f269eeb142cef8809b28fb70de3fa57998398,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761988227449639556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a10c03a29f4d4d9c61649b9a5d64941,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de92f64e8c564b2e15a82533a321
66c758aeafe35bbc57469519bb24cd65be57,PodSandboxId:70138226f92eb528456f8b9ea362b6f28c8d944efd0a34c0ba04075dcd37c4ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761988227542580924,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7204fc2807c91c2baeb21d904e5b3e8,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e71039fa4c92372a4d04f9348709d0fc7cedeaa9c8d054fbf0d38ab2da2f3b1,PodSandboxId:ab5e8ba1a8d18c809b77802574cda9346aeb390ec2de791545670977d988de80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761988227321643763,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8qv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d891ac56-f0c4-46ba-bce1-fb68e7eb54a3,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fbfacd7f2a11f1e822e898ef1c1a0d7d4c85fd05899505e011528adcfbc480c,PodSandboxId:ff3380e3e50ee333855f1e94c42078ac4667a94d5708722ca2db9b78941f9305,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761988186258636450,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-854568,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b858069348de84ce0334761afe76b9b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.k
ubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cd8344d8c832c19add0478500062dcd8ed023406e149142e78a049f0304e04c,PodSandboxId:952c34f1f33f41404348bdffb010de32512512f46f9a22c5919b2e55aadaad34,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761988172472296819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-626v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534f1588-2719-4435-9399-fcf4dff390de,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0be2025b-7957-4553-9b61-57d1b90a4a51 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e1db797037a3e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e     9 minutes ago       Exited              mount-munger              0                   d67ad6ff7673b       busybox-mount
	ad6d9bcb47964       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6   10 minutes ago      Running             echo-server               0                   0e5dbb626ffaf       hello-node-connect-7d85dfc575-8fqgj
	b15fd989610bc       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                        10 minutes ago      Running             kube-proxy                4                   ab5e8ba1a8d18       kube-proxy-p8qv6
	27e849fb394fc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                        10 minutes ago      Running             storage-provisioner       4                   42ddeb7ee9b66       storage-provisioner
	0b2d3d715d65d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                        10 minutes ago      Running             kube-apiserver            0                   21ec93d6e0dcf       kube-apiserver-functional-854568
	7e1306ed5ca1d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                        10 minutes ago      Running             kube-controller-manager   4                   70138226f92eb       kube-controller-manager-functional-854568
	4dcaae31b320d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                        10 minutes ago      Running             etcd                      4                   61712013dba87       etcd-functional-854568
	0451d0b1d08ba       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                        10 minutes ago      Running             coredns                   2                   1ee40d241e597       coredns-66bc5c9577-626v2
	0bc1398379b4a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                        10 minutes ago      Running             kube-scheduler            3                   58f8c972b4dbe       kube-scheduler-functional-854568
	d109feadf1871       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                        10 minutes ago      Exited              storage-provisioner       3                   42ddeb7ee9b66       storage-provisioner
	de92f64e8c564       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                        10 minutes ago      Exited              kube-controller-manager   3                   70138226f92eb       kube-controller-manager-functional-854568
	806eea7f9cd39       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                        10 minutes ago      Exited              etcd                      3                   61712013dba87       etcd-functional-854568
	7e71039fa4c92       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                        10 minutes ago      Exited              kube-proxy                3                   ab5e8ba1a8d18       kube-proxy-p8qv6
	0fbfacd7f2a11       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                        11 minutes ago      Exited              kube-scheduler            2                   ff3380e3e50ee       kube-scheduler-functional-854568
	5cd8344d8c832       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                        11 minutes ago      Exited              coredns                   1                   952c34f1f33f4       coredns-66bc5c9577-626v2
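The table above is the standard all-containers listing from the CRI CLI, covering both running and exited attempts. A rough equivalent, assuming the same profile and a minikube binary on PATH:

  # list every container (running and exited) known to CRI-O on the node
  minikube -p functional-854568 ssh -- sudo crictl ps -a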
	
	
	==> coredns [0451d0b1d08ba6977476b1cc2964353404f0b83988abcf00a95a01b3055c6a10] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40232 - 22482 "HINFO IN 5854806722054425578.3190548008883538820. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.030681733s
	
	
	==> coredns [5cd8344d8c832c19add0478500062dcd8ed023406e149142e78a049f0304e04c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50651 - 59818 "HINFO IN 8748826513468128324.7719950190033398852. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018360541s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
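The two coredns blocks above are per-container logs: 0451d0b1d08ba is the instance still running (restart count 2) and 5cd8344d8c832 has already exited (restart count 1). A sketch for pulling them again, assuming the kubectl context carries the profile name:

  # current coredns instance
  kubectl --context functional-854568 -n kube-system logs coredns-66bc5c9577-626v2
  # most recently terminated instance of the same pod
  kubectl --context functional-854568 -n kube-system logs coredns-66bc5c9577-626v2 --previous
  # or read the exited container directly by a prefix of its ID
  minikube -p functional-854568 ssh -- sudo crictl logs 5cd8344d8c832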
	
	
	==> describe nodes <==
	Name:               functional-854568
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-854568
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=functional-854568
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_08_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:08:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-854568
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:21:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:20:56 +0000   Sat, 01 Nov 2025 09:08:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:20:56 +0000   Sat, 01 Nov 2025 09:08:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:20:56 +0000   Sat, 01 Nov 2025 09:08:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:20:56 +0000   Sat, 01 Nov 2025 09:08:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.129
	  Hostname:    functional-854568
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 cdac547e78d548549cd4406c550707a8
	  System UUID:                cdac547e-78d5-4854-9cd4-406c550707a8
	  Boot ID:                    4fee0e31-2a9b-4ffb-9a8e-d63cba9bf994
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-pvt5m                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-8fqgj           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-dqd4j                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-626v2                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-functional-854568                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-854568              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-854568     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-p8qv6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-854568              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-m4r9g    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m25s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-mk8vc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-854568 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-854568 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-854568 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeReady                12m                kubelet          Node functional-854568 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node functional-854568 event: Registered Node functional-854568 in Controller
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-854568 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-854568 status is now: NodeHasSufficientMemory
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-854568 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node functional-854568 event: Registered Node functional-854568 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-854568 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-854568 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-854568 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node functional-854568 event: Registered Node functional-854568 in Controller
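The node description above (labels, conditions, capacity, non-terminated pods, events) is ordinary kubectl output and can be regenerated against the same cluster; assuming the context name matches the profile:

  kubectl --context functional-854568 describe node functional-854568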
	
	
	==> dmesg <==
	[  +0.001865] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.187392] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000020] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.090698] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.096999] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.135600] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.000071] kauditd_printk_skb: 18 callbacks suppressed
	[  +9.667190] kauditd_printk_skb: 237 callbacks suppressed
	[Nov 1 09:09] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.107780] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.934436] kauditd_printk_skb: 338 callbacks suppressed
	[  +5.546896] kauditd_printk_skb: 75 callbacks suppressed
	[Nov 1 09:10] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.111141] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.580326] kauditd_printk_skb: 56 callbacks suppressed
	[  +0.631439] kauditd_printk_skb: 314 callbacks suppressed
	[  +1.514979] kauditd_printk_skb: 98 callbacks suppressed
	[  +0.072142] kauditd_printk_skb: 109 callbacks suppressed
	[Nov 1 09:11] kauditd_printk_skb: 107 callbacks suppressed
	[  +5.404869] kauditd_printk_skb: 26 callbacks suppressed
	[ +20.565921] kauditd_printk_skb: 38 callbacks suppressed
	[ +12.688476] kauditd_printk_skb: 31 callbacks suppressed
	[Nov 1 09:12] kauditd_printk_skb: 74 callbacks suppressed
	[Nov 1 09:16] crun[9745]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	
	
	==> etcd [4dcaae31b320d80c9f04d5efd24184a4beb5ba44a54a55897bc3885db2101c53] <==
	{"level":"warn","ts":"2025-11-01T09:10:32.934181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:32.943838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:32.957578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:32.967277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:32.972052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:32.981207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:32.987358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:32.996312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.003094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.014585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.018701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.027009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.039030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.052484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.056306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.069834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.086362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.098344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.101563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.110131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.119029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:10:33.163128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58564","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T09:20:32.452697Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1058}
	{"level":"info","ts":"2025-11-01T09:20:32.485598Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1058,"took":"30.815227ms","hash":1603543345,"current-db-size-bytes":3420160,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1495040,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-11-01T09:20:32.485640Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1603543345,"revision":1058,"compact-revision":-1}
	
	
	==> etcd [806eea7f9cd39165a634bd0823e0beeaf596c091f2cb1e52c537e2a119cc0493] <==
	{"level":"info","ts":"2025-11-01T09:10:28.261021Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"a2af9788ad7a361f","local-member-id":"245a8df1c58de0e1","recovered-remote-peer-id":"245a8df1c58de0e1","recovered-remote-peer-urls":["https://192.168.39.129:2380"],"recovered-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-11-01T09:10:28.261035Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	{"level":"info","ts":"2025-11-01T09:10:28.261047Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	{"level":"info","ts":"2025-11-01T09:10:28.261128Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	{"level":"info","ts":"2025-11-01T09:10:28.261265Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"245a8df1c58de0e1 switched to configuration voters=()"}
	{"level":"info","ts":"2025-11-01T09:10:28.261308Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"245a8df1c58de0e1 became follower at term 3"}
	{"level":"info","ts":"2025-11-01T09:10:28.261320Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft 245a8df1c58de0e1 [peers: [], term: 3, commit: 566, applied: 0, lastindex: 566, lastterm: 3]"}
	{"level":"warn","ts":"2025-11-01T09:10:28.268634Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2025-11-01T09:10:28.299822Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":520}
	{"level":"info","ts":"2025-11-01T09:10:28.319741Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2025-11-01T09:10:28.320231Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"245a8df1c58de0e1","timeout":"7s"}
	{"level":"info","ts":"2025-11-01T09:10:28.320514Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"245a8df1c58de0e1"}
	{"level":"info","ts":"2025-11-01T09:10:28.320587Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"245a8df1c58de0e1","local-server-version":"3.6.4","cluster-id":"a2af9788ad7a361f","cluster-version":"3.6"}
	{"level":"info","ts":"2025-11-01T09:10:28.320895Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"245a8df1c58de0e1","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-11-01T09:10:28.321037Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T09:10:28.321065Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T09:10:28.321072Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T09:10:28.322905Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"245a8df1c58de0e1 switched to configuration voters=(2619562202810409185)"}
	{"level":"info","ts":"2025-11-01T09:10:28.324060Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"a2af9788ad7a361f","local-member-id":"245a8df1c58de0e1","added-peer-id":"245a8df1c58de0e1","added-peer-peer-urls":["https://192.168.39.129:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-11-01T09:10:28.324182Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"a2af9788ad7a361f","local-member-id":"245a8df1c58de0e1","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-11-01T09:10:28.327793Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-01T09:10:28.333565Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"245a8df1c58de0e1","initial-advertise-peer-urls":["https://192.168.39.129:2380"],"listen-peer-urls":["https://192.168.39.129:2380"],"advertise-client-urls":["https://192.168.39.129:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.129:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-01T09:10:28.333610Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-01T09:10:28.334221Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.39.129:2380"}
	{"level":"info","ts":"2025-11-01T09:10:28.334264Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.39.129:2380"}
	
	
	==> kernel <==
	 09:21:10 up 13 min,  0 users,  load average: 0.37, 0.36, 0.29
	Linux functional-854568 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [0b2d3d715d65d8daca359ee84aa5bb213762342047206346ec68002680e2c6a6] <==
	I1101 09:10:34.016606       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 09:10:34.016636       1 policy_source.go:240] refreshing policies
	I1101 09:10:34.017478       1 aggregator.go:171] initial CRD sync complete...
	I1101 09:10:34.017511       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 09:10:34.017517       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:10:34.017522       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:10:34.017636       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 09:10:34.032707       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:10:34.521559       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:10:34.719912       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:10:35.759836       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:10:35.810339       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 09:10:35.835705       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:10:35.847097       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:10:37.471420       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:10:37.521759       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:10:53.026913       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.16.89"}
	I1101 09:10:57.543323       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:10:57.668451       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.98.190.164"}
	I1101 09:10:58.399714       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.3.18"}
	I1101 09:11:09.020220       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.99.182.209"}
	I1101 09:11:45.352100       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:11:45.701480       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.47.106"}
	I1101 09:11:45.721847       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.239.202"}
	I1101 09:20:33.929554       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [7e1306ed5ca1da3b4bb7e6a76b365506383370faadb8ae1ef828ed8e2856a116] <==
	I1101 09:10:37.273009       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 09:10:37.274250       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 09:10:37.272871       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 09:10:37.277632       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:10:37.280560       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 09:10:37.280639       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:10:37.282005       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:10:37.282030       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:10:37.282037       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:10:37.282491       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:10:37.286271       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 09:10:37.288626       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:10:37.291296       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:10:37.292027       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 09:10:37.294812       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 09:10:37.301810       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 09:10:37.301873       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 09:10:37.308193       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:10:37.314539       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 09:10:37.319476       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	E1101 09:11:45.470473       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:11:45.490084       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:11:45.497121       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:11:45.519845       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:11:45.526166       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [de92f64e8c564b2e15a82533a32166c758aeafe35bbc57469519bb24cd65be57] <==
	
	
	==> kube-proxy [7e71039fa4c92372a4d04f9348709d0fc7cedeaa9c8d054fbf0d38ab2da2f3b1] <==
	I1101 09:10:27.847135       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:10:27.940781       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1101 09:10:27.943378       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-854568&limit=500&resourceVersion=0\": dial tcp 192.168.39.129:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-proxy [b15fd989610bc82b5ff7d2143c752c984a8ed407cd980a1d913715ac95f1a45d] <==
	I1101 09:10:35.196540       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:10:35.297438       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:10:35.297465       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.129"]
	E1101 09:10:35.297669       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:10:35.337819       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 09:10:35.337886       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 09:10:35.337907       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:10:35.348531       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:10:35.348822       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:10:35.348835       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:10:35.351142       1 config.go:309] "Starting node config controller"
	I1101 09:10:35.351171       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:10:35.351178       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:10:35.362216       1 config.go:200] "Starting service config controller"
	I1101 09:10:35.362396       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:10:35.362429       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:10:35.363077       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:10:35.362692       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:10:35.363316       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:10:35.363374       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:10:35.463352       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:10:35.463497       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [0bc1398379b4a0842eca102935669fe8ffb1bfa5acb9325f2477e376a4ca6a00] <==
	E1101 09:10:30.940568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.39.129:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.129:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:10:31.038150       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.129:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.129:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:10:31.048810       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.129:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.129:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:10:31.122051       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.39.129:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.129:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1101 09:10:31.130031       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.39.129:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.129:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:10:31.179610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.39.129:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.129:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:10:31.201604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.39.129:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.129:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:10:33.833731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:10:33.833803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:10:33.833850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:10:33.833894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:10:33.834511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:10:33.834805       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:10:33.835005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:10:33.835221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:10:33.835472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:10:33.835690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:10:33.835916       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:10:33.836419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:10:33.836534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:10:33.836754       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:10:33.838399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:10:33.838448       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:10:33.868441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1101 09:10:37.999449       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [0fbfacd7f2a11f1e822e898ef1c1a0d7d4c85fd05899505e011528adcfbc480c] <==
	I1101 09:09:47.905692       1 serving.go:386] Generated self-signed cert in-memory
	W1101 09:09:49.393112       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 09:09:49.393155       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 09:09:49.393166       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 09:09:49.393171       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 09:09:49.503208       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:09:49.503248       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:09:49.507313       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:09:49.507383       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:09:49.507850       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:09:49.507909       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:09:49.608223       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:10:11.028026       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1101 09:10:11.028140       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1101 09:10:11.028163       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1101 09:10:11.028183       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1101 09:10:11.028202       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 01 09:20:25 functional-854568 kubelet[6640]: E1101 09:20:25.478737    6640 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-pvt5m" podUID="dc5ce2a1-fb71-4117-9dec-aa7f6043b738"
	Nov 01 09:20:30 functional-854568 kubelet[6640]: E1101 09:20:30.616558    6640 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod534f1588-2719-4435-9399-fcf4dff390de/crio-952c34f1f33f41404348bdffb010de32512512f46f9a22c5919b2e55aadaad34: Error finding container 952c34f1f33f41404348bdffb010de32512512f46f9a22c5919b2e55aadaad34: Status 404 returned error can't find the container with id 952c34f1f33f41404348bdffb010de32512512f46f9a22c5919b2e55aadaad34
	Nov 01 09:20:30 functional-854568 kubelet[6640]: E1101 09:20:30.617078    6640 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod3b858069348de84ce0334761afe76b9b/crio-ff3380e3e50ee333855f1e94c42078ac4667a94d5708722ca2db9b78941f9305: Error finding container ff3380e3e50ee333855f1e94c42078ac4667a94d5708722ca2db9b78941f9305: Status 404 returned error can't find the container with id ff3380e3e50ee333855f1e94c42078ac4667a94d5708722ca2db9b78941f9305
	Nov 01 09:20:30 functional-854568 kubelet[6640]: E1101 09:20:30.796221    6640 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761988830795793738  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203241}  inodes_used:{value:105}}"
	Nov 01 09:20:30 functional-854568 kubelet[6640]: E1101 09:20:30.796264    6640 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761988830795793738  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203241}  inodes_used:{value:105}}"
	Nov 01 09:20:31 functional-854568 kubelet[6640]: E1101 09:20:31.480480    6640 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-m4r9g" podUID="b35ccd8f-dbbd-4df5-a652-9d21e07e5964"
	Nov 01 09:20:36 functional-854568 kubelet[6640]: E1101 09:20:36.479343    6640 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-pvt5m" podUID="dc5ce2a1-fb71-4117-9dec-aa7f6043b738"
	Nov 01 09:20:37 functional-854568 kubelet[6640]: E1101 09:20:37.317227    6640 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Nov 01 09:20:37 functional-854568 kubelet[6640]: E1101 09:20:37.317376    6640 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Nov 01 09:20:37 functional-854568 kubelet[6640]: E1101 09:20:37.317580    6640 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-mk8vc_kubernetes-dashboard(02ab3a50-c383-42ee-8979-1a3ef29ad317): ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 01 09:20:37 functional-854568 kubelet[6640]: E1101 09:20:37.317647    6640 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mk8vc" podUID="02ab3a50-c383-42ee-8979-1a3ef29ad317"
	Nov 01 09:20:40 functional-854568 kubelet[6640]: E1101 09:20:40.798877    6640 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761988840798536771  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203241}  inodes_used:{value:105}}"
	Nov 01 09:20:40 functional-854568 kubelet[6640]: E1101 09:20:40.798900    6640 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761988840798536771  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203241}  inodes_used:{value:105}}"
	Nov 01 09:20:50 functional-854568 kubelet[6640]: E1101 09:20:50.802196    6640 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761988850801676029  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203241}  inodes_used:{value:105}}"
	Nov 01 09:20:50 functional-854568 kubelet[6640]: E1101 09:20:50.802221    6640 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761988850801676029  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203241}  inodes_used:{value:105}}"
	Nov 01 09:20:52 functional-854568 kubelet[6640]: E1101 09:20:52.481745    6640 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mk8vc" podUID="02ab3a50-c383-42ee-8979-1a3ef29ad317"
	Nov 01 09:21:00 functional-854568 kubelet[6640]: E1101 09:21:00.804169    6640 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761988860803704761  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203241}  inodes_used:{value:105}}"
	Nov 01 09:21:00 functional-854568 kubelet[6640]: E1101 09:21:00.804437    6640 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761988860803704761  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203241}  inodes_used:{value:105}}"
	Nov 01 09:21:04 functional-854568 kubelet[6640]: E1101 09:21:04.483339    6640 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mk8vc" podUID="02ab3a50-c383-42ee-8979-1a3ef29ad317"
	Nov 01 09:21:07 functional-854568 kubelet[6640]: E1101 09:21:07.402444    6640 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 01 09:21:07 functional-854568 kubelet[6640]: E1101 09:21:07.402647    6640 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 01 09:21:07 functional-854568 kubelet[6640]: E1101 09:21:07.403190    6640 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(594fa138-93b5-43b5-b787-97f37ee7079c): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 01 09:21:07 functional-854568 kubelet[6640]: E1101 09:21:07.403303    6640 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="594fa138-93b5-43b5-b787-97f37ee7079c"
	Nov 01 09:21:10 functional-854568 kubelet[6640]: E1101 09:21:10.806205    6640 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761988870806002995  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203241}  inodes_used:{value:105}}"
	Nov 01 09:21:10 functional-854568 kubelet[6640]: E1101 09:21:10.806225    6640 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761988870806002995  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203241}  inodes_used:{value:105}}"
	
	
	==> storage-provisioner [27e849fb394fced4618d4f167f2d823a9c7ca62600a1d78cf02fea45d44d76df] <==
	W1101 09:20:45.823555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:20:47.827422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:20:47.832598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:20:49.836288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:20:49.842135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:20:51.847367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:20:51.852884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:20:53.856306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:20:53.861537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:20:55.865652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:20:55.874580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:20:57.878307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:20:57.885643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:20:59.889115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:20:59.893716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:01.898664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:01.905419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:03.910082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:03.920705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:05.924161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:05.929748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:07.933402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:07.938647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:09.943505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:21:09.960033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d109feadf1871d0729895d871197182682bc15a08c9e3b8946bde6b349051334] <==
	I1101 09:10:28.204289       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 09:10:28.209290       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-854568 -n functional-854568
helpers_test.go:269: (dbg) Run:  kubectl --context functional-854568 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-pvt5m mysql-5bb876957f-dqd4j sp-pod dashboard-metrics-scraper-77bf4d6c4c-m4r9g kubernetes-dashboard-855c9754f9-mk8vc
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-854568 describe pod busybox-mount hello-node-75c85bcc94-pvt5m mysql-5bb876957f-dqd4j sp-pod dashboard-metrics-scraper-77bf4d6c4c-m4r9g kubernetes-dashboard-855c9754f9-mk8vc
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-854568 describe pod busybox-mount hello-node-75c85bcc94-pvt5m mysql-5bb876957f-dqd4j sp-pod dashboard-metrics-scraper-77bf4d6c4c-m4r9g kubernetes-dashboard-855c9754f9-mk8vc: exit status 1 (97.510737ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-854568/192.168.39.129
	Start Time:       Sat, 01 Nov 2025 09:10:58 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://e1db797037a3e231a8ffd1c56a3e45cc9827cda7e2a2a278c8d970fdbd3df2b1
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 01 Nov 2025 09:11:31 +0000
	      Finished:     Sat, 01 Nov 2025 09:11:31 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fvp2s (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-fvp2s:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  10m    default-scheduler  Successfully assigned default/busybox-mount to functional-854568
	  Normal  Pulling    10m    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m40s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.358s (32.29s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m40s  kubelet            Created container: mount-munger
	  Normal  Started    9m40s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-pvt5m
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-854568/192.168.39.129
	Start Time:       Sat, 01 Nov 2025 09:10:58 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-djsds (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-djsds:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-pvt5m to functional-854568
	  Warning  Failed     8m4s                  kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     108s (x3 over 9m42s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     108s (x4 over 9m42s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    35s (x11 over 9m41s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     35s (x11 over 9m41s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    21s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-dqd4j
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-854568/192.168.39.129
	Start Time:       Sat, 01 Nov 2025 09:11:09 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c7rfc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-c7rfc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-5bb876957f-dqd4j to functional-854568
	  Warning  Failed     5m33s (x2 over 8m40s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m19s (x3 over 8m40s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m19s                  kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    99s (x5 over 8m39s)    kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     99s (x5 over 8m39s)    kubelet            Error: ImagePullBackOff
	  Normal   Pulling    88s (x4 over 10m)      kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-854568/192.168.39.129
	Start Time:       Sat, 01 Nov 2025 09:11:03 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bblfx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-bblfx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/sp-pod to functional-854568
	  Warning  Failed     6m3s                  kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2m40s (x5 over 9m9s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     2m40s (x5 over 9m9s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    2m27s (x4 over 10m)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     4s (x3 over 9m10s)    kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     4s (x4 over 9m10s)    kubelet            Error: ErrImagePull

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-m4r9g" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-mk8vc" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-854568 describe pod busybox-mount hello-node-75c85bcc94-pvt5m mysql-5bb876957f-dqd4j sp-pod dashboard-metrics-scraper-77bf4d6c4c-m4r9g kubernetes-dashboard-855c9754f9-mk8vc: exit status 1
--- FAIL: TestFunctional/parallel/MySQL (602.68s)
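The repeated toomanyrequests failures above are Docker Hub's unauthenticated pull rate limit. A minimal mitigation sketch, assuming placeholder Docker Hub credentials and a hypothetical secret name "regcred" (neither appears in this run); alternatively an image already present on the host can be side-loaded so no registry pull happens at all:

    # attach registry credentials to the default service account so kubelet pulls are authenticated
    kubectl --context functional-854568 create secret docker-registry regcred \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username=<dockerhub-user> --docker-password=<dockerhub-token>
    kubectl --context functional-854568 patch serviceaccount default \
      -p '{"imagePullSecrets":[{"name":"regcred"}]}'
    # or side-load an image from the host's image store into the cluster node, bypassing the registry
    out/minikube-linux-amd64 -p functional-854568 image load docker.io/mysql:5.7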

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-854568 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-854568 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-pvt5m" [dc5ce2a1-fb71-4117-9dec-aa7f6043b738] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-854568 -n functional-854568
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-01 09:20:58.658869179 +0000 UTC m=+2193.685546423
functional_test.go:1460: (dbg) Run:  kubectl --context functional-854568 describe po hello-node-75c85bcc94-pvt5m -n default
functional_test.go:1460: (dbg) kubectl --context functional-854568 describe po hello-node-75c85bcc94-pvt5m -n default:
Name:             hello-node-75c85bcc94-pvt5m
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-854568/192.168.39.129
Start Time:       Sat, 01 Nov 2025 09:10:58 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-djsds (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-djsds:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-pvt5m to functional-854568
Warning  Failed     7m51s                 kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     95s (x3 over 9m29s)   kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     95s (x4 over 9m29s)   kubelet            Error: ErrImagePull
Normal   BackOff    22s (x11 over 9m28s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     22s (x11 over 9m28s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    8s (x5 over 9m59s)    kubelet            Pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-854568 logs hello-node-75c85bcc94-pvt5m -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-854568 logs hello-node-75c85bcc94-pvt5m -n default: exit status 1 (74.782127ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-pvt5m" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-854568 logs hello-node-75c85bcc94-pvt5m -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.60s)
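The deployment itself was created; it never became ready only because the echo-server image could not be pulled. A short diagnostic sketch for confirming that against this cluster (not part of the test):

    kubectl --context functional-854568 rollout status deployment/hello-node --timeout=60s
    kubectl --context functional-854568 get pods -l app=hello-node -o wide
    kubectl --context functional-854568 get events --field-selector involvedObject.name=hello-node-75c85bcc94-pvt5m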

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-854568 service --namespace=default --https --url hello-node: exit status 115 (258.94951ms)

                                                
                                                
-- stdout --
	https://192.168.39.129:30491
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-854568 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.26s)
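minikube service exits with SVC_UNREACHABLE because the hello-node NodePort service has no ready endpoints while its only pod sits in ImagePullBackOff. One way to confirm (a sketch, not part of the test):

    kubectl --context functional-854568 get svc hello-node -o wide   # NodePort 30491 is allocated
    kubectl --context functional-854568 get endpoints hello-node     # no addresses until the pod is Ready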

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-854568 service hello-node --url --format={{.IP}}: exit status 115 (268.958675ms)

                                                
                                                
-- stdout --
	192.168.39.129
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-854568 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-854568 service hello-node --url: exit status 115 (262.912316ms)

                                                
                                                
-- stdout --
	http://192.168.39.129:30491
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-854568 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.129:30491
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.26s)

                                                
                                    
x
+
TestPreload (155.6s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-413642 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
E1101 10:00:56.882360  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/functional-854568/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-413642 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m33.892339573s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-413642 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-413642 image pull gcr.io/k8s-minikube/busybox: (2.49086843s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-413642
E1101 10:01:35.408714  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-413642: (7.328181989s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-413642 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-413642 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (49.032813011s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-413642 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
panic.go:636: *** TestPreload FAILED at 2025-11-01 10:02:25.231016699 +0000 UTC m=+4680.257693951
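TestPreload starts the cluster with --preload=false, pulls gcr.io/k8s-minikube/busybox, stops the cluster, restarts it with the default preload, and expects the previously pulled image to survive the restart; the image list above contains only the default images, so the assertion fails. The check can be replayed by hand with the same binary and profile used in this run:

    out/minikube-linux-amd64 -p test-preload-413642 image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-amd64 stop -p test-preload-413642
    out/minikube-linux-amd64 start -p test-preload-413642 --memory=3072 --wait=true --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p test-preload-413642 image list | grep busybox   # empty output reproduces the failure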
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-413642 -n test-preload-413642
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-413642 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-413642 logs -n 25: (1.106211719s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-119623 ssh -n multinode-119623-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-119623     │ jenkins │ v1.37.0 │ 01 Nov 25 09:48 UTC │ 01 Nov 25 09:48 UTC │
	│ ssh     │ multinode-119623 ssh -n multinode-119623 sudo cat /home/docker/cp-test_multinode-119623-m03_multinode-119623.txt                                          │ multinode-119623     │ jenkins │ v1.37.0 │ 01 Nov 25 09:48 UTC │ 01 Nov 25 09:48 UTC │
	│ cp      │ multinode-119623 cp multinode-119623-m03:/home/docker/cp-test.txt multinode-119623-m02:/home/docker/cp-test_multinode-119623-m03_multinode-119623-m02.txt │ multinode-119623     │ jenkins │ v1.37.0 │ 01 Nov 25 09:48 UTC │ 01 Nov 25 09:48 UTC │
	│ ssh     │ multinode-119623 ssh -n multinode-119623-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-119623     │ jenkins │ v1.37.0 │ 01 Nov 25 09:48 UTC │ 01 Nov 25 09:48 UTC │
	│ ssh     │ multinode-119623 ssh -n multinode-119623-m02 sudo cat /home/docker/cp-test_multinode-119623-m03_multinode-119623-m02.txt                                  │ multinode-119623     │ jenkins │ v1.37.0 │ 01 Nov 25 09:48 UTC │ 01 Nov 25 09:48 UTC │
	│ node    │ multinode-119623 node stop m03                                                                                                                            │ multinode-119623     │ jenkins │ v1.37.0 │ 01 Nov 25 09:48 UTC │ 01 Nov 25 09:48 UTC │
	│ node    │ multinode-119623 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-119623     │ jenkins │ v1.37.0 │ 01 Nov 25 09:48 UTC │ 01 Nov 25 09:49 UTC │
	│ node    │ list -p multinode-119623                                                                                                                                  │ multinode-119623     │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │                     │
	│ stop    │ -p multinode-119623                                                                                                                                       │ multinode-119623     │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:51 UTC │
	│ start   │ -p multinode-119623 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-119623     │ jenkins │ v1.37.0 │ 01 Nov 25 09:51 UTC │ 01 Nov 25 09:54 UTC │
	│ node    │ list -p multinode-119623                                                                                                                                  │ multinode-119623     │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │                     │
	│ node    │ multinode-119623 node delete m03                                                                                                                          │ multinode-119623     │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ stop    │ multinode-119623 stop                                                                                                                                     │ multinode-119623     │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:57 UTC │
	│ start   │ -p multinode-119623 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-119623     │ jenkins │ v1.37.0 │ 01 Nov 25 09:57 UTC │ 01 Nov 25 09:59 UTC │
	│ node    │ list -p multinode-119623                                                                                                                                  │ multinode-119623     │ jenkins │ v1.37.0 │ 01 Nov 25 09:59 UTC │                     │
	│ start   │ -p multinode-119623-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-119623-m02 │ jenkins │ v1.37.0 │ 01 Nov 25 09:59 UTC │                     │
	│ start   │ -p multinode-119623-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-119623-m03 │ jenkins │ v1.37.0 │ 01 Nov 25 09:59 UTC │ 01 Nov 25 09:59 UTC │
	│ node    │ add -p multinode-119623                                                                                                                                   │ multinode-119623     │ jenkins │ v1.37.0 │ 01 Nov 25 09:59 UTC │                     │
	│ delete  │ -p multinode-119623-m03                                                                                                                                   │ multinode-119623-m03 │ jenkins │ v1.37.0 │ 01 Nov 25 09:59 UTC │ 01 Nov 25 09:59 UTC │
	│ delete  │ -p multinode-119623                                                                                                                                       │ multinode-119623     │ jenkins │ v1.37.0 │ 01 Nov 25 09:59 UTC │ 01 Nov 25 09:59 UTC │
	│ start   │ -p test-preload-413642 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0   │ test-preload-413642  │ jenkins │ v1.37.0 │ 01 Nov 25 09:59 UTC │ 01 Nov 25 10:01 UTC │
	│ image   │ test-preload-413642 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-413642  │ jenkins │ v1.37.0 │ 01 Nov 25 10:01 UTC │ 01 Nov 25 10:01 UTC │
	│ stop    │ -p test-preload-413642                                                                                                                                    │ test-preload-413642  │ jenkins │ v1.37.0 │ 01 Nov 25 10:01 UTC │ 01 Nov 25 10:01 UTC │
	│ start   │ -p test-preload-413642 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                           │ test-preload-413642  │ jenkins │ v1.37.0 │ 01 Nov 25 10:01 UTC │ 01 Nov 25 10:02 UTC │
	│ image   │ test-preload-413642 image list                                                                                                                            │ test-preload-413642  │ jenkins │ v1.37.0 │ 01 Nov 25 10:02 UTC │ 01 Nov 25 10:02 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:01:36
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:01:36.048715  565615 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:01:36.049027  565615 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:01:36.049037  565615 out.go:374] Setting ErrFile to fd 2...
	I1101 10:01:36.049041  565615 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:01:36.049224  565615 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-530629/.minikube/bin
	I1101 10:01:36.049663  565615 out.go:368] Setting JSON to false
	I1101 10:01:36.050703  565615 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":67418,"bootTime":1761923878,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:01:36.050796  565615 start.go:143] virtualization: kvm guest
	I1101 10:01:36.052783  565615 out.go:179] * [test-preload-413642] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:01:36.053875  565615 notify.go:221] Checking for updates...
	I1101 10:01:36.053926  565615 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 10:01:36.055218  565615 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:01:36.056507  565615 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-530629/kubeconfig
	I1101 10:01:36.057586  565615 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-530629/.minikube
	I1101 10:01:36.062133  565615 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:01:36.063348  565615 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:01:36.064938  565615 config.go:182] Loaded profile config "test-preload-413642": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1101 10:01:36.066581  565615 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1101 10:01:36.067582  565615 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:01:36.103100  565615 out.go:179] * Using the kvm2 driver based on existing profile
	I1101 10:01:36.104277  565615 start.go:309] selected driver: kvm2
	I1101 10:01:36.104292  565615 start.go:930] validating driver "kvm2" against &{Name:test-preload-413642 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-413642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.175 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:01:36.104431  565615 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:01:36.105814  565615 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:01:36.105865  565615 cni.go:84] Creating CNI manager for ""
	I1101 10:01:36.105954  565615 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 10:01:36.106004  565615 start.go:353] cluster config:
	{Name:test-preload-413642 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-413642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.175 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:01:36.106122  565615 iso.go:125] acquiring lock: {Name:mk4a0ae0d13e232f8e381ad8e5059e42b27a0733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:01:36.107524  565615 out.go:179] * Starting "test-preload-413642" primary control-plane node in "test-preload-413642" cluster
	I1101 10:01:36.108567  565615 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1101 10:01:36.130980  565615 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1101 10:01:36.131078  565615 cache.go:59] Caching tarball of preloaded images
	I1101 10:01:36.131295  565615 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1101 10:01:36.132945  565615 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1101 10:01:36.133968  565615 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1101 10:01:36.162173  565615 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1101 10:01:36.162219  565615 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21833-530629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1101 10:01:38.723986  565615 cache.go:62] Finished verifying existence of preloaded tar for v1.32.0 on crio
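The preload step above resolves the GCS checksum and downloads the tarball into the local cache. Fetching and verifying it by hand is equivalent (a sketch using the URL, cache path and md5 from this log):

    curl -fL -o /home/jenkins/minikube-integration/21833-530629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 \
      https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
    md5sum /home/jenkins/minikube-integration/21833-530629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
    # expected: 2acdb4dde52794f2167c79dcee7507ae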
	I1101 10:01:38.724144  565615 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/test-preload-413642/config.json ...
	I1101 10:01:38.724373  565615 start.go:360] acquireMachinesLock for test-preload-413642: {Name:mk0f0dee5270210132f861d1e08706cfde31b35b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 10:01:38.724450  565615 start.go:364] duration metric: took 52.15µs to acquireMachinesLock for "test-preload-413642"
	I1101 10:01:38.724466  565615 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:01:38.724474  565615 fix.go:54] fixHost starting: 
	I1101 10:01:38.726401  565615 fix.go:112] recreateIfNeeded on test-preload-413642: state=Stopped err=<nil>
	W1101 10:01:38.726432  565615 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 10:01:38.728310  565615 out.go:252] * Restarting existing kvm2 VM for "test-preload-413642" ...
	I1101 10:01:38.728361  565615 main.go:143] libmachine: starting domain...
	I1101 10:01:38.728374  565615 main.go:143] libmachine: ensuring networks are active...
	I1101 10:01:38.729381  565615 main.go:143] libmachine: Ensuring network default is active
	I1101 10:01:38.729846  565615 main.go:143] libmachine: Ensuring network mk-test-preload-413642 is active
	I1101 10:01:38.730386  565615 main.go:143] libmachine: getting domain XML...
	I1101 10:01:38.731622  565615 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-413642</name>
	  <uuid>83cb3be6-abdb-479d-bf2b-0d123971ae09</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21833-530629/.minikube/machines/test-preload-413642/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21833-530629/.minikube/machines/test-preload-413642/test-preload-413642.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:c0:53:22'/>
	      <source network='mk-test-preload-413642'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:bd:52:76'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
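libmachine drives libvirt directly here: it reads the domain XML above, ensures the default and mk-test-preload-413642 networks are active, and boots the domain. The same steps can be reproduced with virsh against the qemu:///system URI from the config (a sketch, not commands taken from this run):

    virsh -c qemu:///system dumpxml test-preload-413642                # the XML printed above
    virsh -c qemu:///system net-list --all                             # default and mk-test-preload-413642
    virsh -c qemu:///system start test-preload-413642
    virsh -c qemu:///system net-dhcp-leases mk-test-preload-413642     # lease for 52:54:00:c0:53:22 -> 192.168.39.175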
	
	I1101 10:01:40.017245  565615 main.go:143] libmachine: waiting for domain to start...
	I1101 10:01:40.018782  565615 main.go:143] libmachine: domain is now running
	I1101 10:01:40.018798  565615 main.go:143] libmachine: waiting for IP...
	I1101 10:01:40.019578  565615 main.go:143] libmachine: domain test-preload-413642 has defined MAC address 52:54:00:c0:53:22 in network mk-test-preload-413642
	I1101 10:01:40.020212  565615 main.go:143] libmachine: domain test-preload-413642 has current primary IP address 192.168.39.175 and MAC address 52:54:00:c0:53:22 in network mk-test-preload-413642
	I1101 10:01:40.020225  565615 main.go:143] libmachine: found domain IP: 192.168.39.175
	I1101 10:01:40.020230  565615 main.go:143] libmachine: reserving static IP address...
	I1101 10:01:40.020660  565615 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-413642", mac: "52:54:00:c0:53:22", ip: "192.168.39.175"} in network mk-test-preload-413642: {Iface:virbr1 ExpiryTime:2025-11-01 11:00:08 +0000 UTC Type:0 Mac:52:54:00:c0:53:22 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-413642 Clientid:01:52:54:00:c0:53:22}
	I1101 10:01:40.020694  565615 main.go:143] libmachine: skip adding static IP to network mk-test-preload-413642 - found existing host DHCP lease matching {name: "test-preload-413642", mac: "52:54:00:c0:53:22", ip: "192.168.39.175"}
	I1101 10:01:40.020710  565615 main.go:143] libmachine: reserved static IP address 192.168.39.175 for domain test-preload-413642
	I1101 10:01:40.020717  565615 main.go:143] libmachine: waiting for SSH...
	I1101 10:01:40.020724  565615 main.go:143] libmachine: Getting to WaitForSSH function...
	I1101 10:01:40.023287  565615 main.go:143] libmachine: domain test-preload-413642 has defined MAC address 52:54:00:c0:53:22 in network mk-test-preload-413642
	I1101 10:01:40.023736  565615 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:53:22", ip: ""} in network mk-test-preload-413642: {Iface:virbr1 ExpiryTime:2025-11-01 11:00:08 +0000 UTC Type:0 Mac:52:54:00:c0:53:22 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-413642 Clientid:01:52:54:00:c0:53:22}
	I1101 10:01:40.023765  565615 main.go:143] libmachine: domain test-preload-413642 has defined IP address 192.168.39.175 and MAC address 52:54:00:c0:53:22 in network mk-test-preload-413642
	I1101 10:01:40.023921  565615 main.go:143] libmachine: Using SSH client type: native
	I1101 10:01:40.024148  565615 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1101 10:01:40.024160  565615 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1101 10:01:43.085324  565615 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.175:22: connect: no route to host
	I1101 10:01:49.165297  565615 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.175:22: connect: no route to host
	I1101 10:01:52.166010  565615 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.175:22: connect: connection refused
	I1101 10:01:55.283063  565615 main.go:143] libmachine: SSH cmd err, output: <nil>: 
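The "no route to host" and "connection refused" errors above are the normal wait-for-SSH retries while the guest boots. Roughly equivalent to the following loop, using the machine key and docker user that appear later in this log (a sketch):

    until ssh -o ConnectTimeout=3 -o StrictHostKeyChecking=no \
        -i /home/jenkins/minikube-integration/21833-530629/.minikube/machines/test-preload-413642/id_rsa \
        docker@192.168.39.175 'exit 0'; do
      sleep 3
    done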
	I1101 10:01:55.286877  565615 main.go:143] libmachine: domain test-preload-413642 has defined MAC address 52:54:00:c0:53:22 in network mk-test-preload-413642
	I1101 10:01:55.287314  565615 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:53:22", ip: ""} in network mk-test-preload-413642: {Iface:virbr1 ExpiryTime:2025-11-01 11:01:51 +0000 UTC Type:0 Mac:52:54:00:c0:53:22 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-413642 Clientid:01:52:54:00:c0:53:22}
	I1101 10:01:55.287346  565615 main.go:143] libmachine: domain test-preload-413642 has defined IP address 192.168.39.175 and MAC address 52:54:00:c0:53:22 in network mk-test-preload-413642
	I1101 10:01:55.287625  565615 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/test-preload-413642/config.json ...
	I1101 10:01:55.287870  565615 machine.go:94] provisionDockerMachine start ...
	I1101 10:01:55.290178  565615 main.go:143] libmachine: domain test-preload-413642 has defined MAC address 52:54:00:c0:53:22 in network mk-test-preload-413642
	I1101 10:01:55.290527  565615 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:53:22", ip: ""} in network mk-test-preload-413642: {Iface:virbr1 ExpiryTime:2025-11-01 11:01:51 +0000 UTC Type:0 Mac:52:54:00:c0:53:22 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-413642 Clientid:01:52:54:00:c0:53:22}
	I1101 10:01:55.290554  565615 main.go:143] libmachine: domain test-preload-413642 has defined IP address 192.168.39.175 and MAC address 52:54:00:c0:53:22 in network mk-test-preload-413642
	I1101 10:01:55.290728  565615 main.go:143] libmachine: Using SSH client type: native
	I1101 10:01:55.290949  565615 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1101 10:01:55.290963  565615 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:01:55.405111  565615 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1101 10:01:55.405157  565615 buildroot.go:166] provisioning hostname "test-preload-413642"
	I1101 10:01:55.408263  565615 main.go:143] libmachine: domain test-preload-413642 has defined MAC address 52:54:00:c0:53:22 in network mk-test-preload-413642
	I1101 10:01:55.408646  565615 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:53:22", ip: ""} in network mk-test-preload-413642: {Iface:virbr1 ExpiryTime:2025-11-01 11:01:51 +0000 UTC Type:0 Mac:52:54:00:c0:53:22 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-413642 Clientid:01:52:54:00:c0:53:22}
	I1101 10:01:55.408676  565615 main.go:143] libmachine: domain test-preload-413642 has defined IP address 192.168.39.175 and MAC address 52:54:00:c0:53:22 in network mk-test-preload-413642
	I1101 10:01:55.408905  565615 main.go:143] libmachine: Using SSH client type: native
	I1101 10:01:55.409173  565615 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1101 10:01:55.409193  565615 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-413642 && echo "test-preload-413642" | sudo tee /etc/hostname
	I1101 10:01:55.543960  565615 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-413642
	
	I1101 10:01:55.547453  565615 main.go:143] libmachine: domain test-preload-413642 has defined MAC address 52:54:00:c0:53:22 in network mk-test-preload-413642
	I1101 10:01:55.547964  565615 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:53:22", ip: ""} in network mk-test-preload-413642: {Iface:virbr1 ExpiryTime:2025-11-01 11:01:51 +0000 UTC Type:0 Mac:52:54:00:c0:53:22 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-413642 Clientid:01:52:54:00:c0:53:22}
	I1101 10:01:55.548003  565615 main.go:143] libmachine: domain test-preload-413642 has defined IP address 192.168.39.175 and MAC address 52:54:00:c0:53:22 in network mk-test-preload-413642
	I1101 10:01:55.548260  565615 main.go:143] libmachine: Using SSH client type: native
	I1101 10:01:55.548539  565615 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1101 10:01:55.548560  565615 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-413642' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-413642/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-413642' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:01:55.676322  565615 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:01:55.676358  565615 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21833-530629/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-530629/.minikube}
	I1101 10:01:55.676409  565615 buildroot.go:174] setting up certificates
	I1101 10:01:55.676418  565615 provision.go:84] configureAuth start
	I1101 10:01:55.679537  565615 main.go:143] libmachine: domain test-preload-413642 has defined MAC address 52:54:00:c0:53:22 in network mk-test-preload-413642
	I1101 10:01:55.680032  565615 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:53:22", ip: ""} in network mk-test-preload-413642: {Iface:virbr1 ExpiryTime:2025-11-01 11:01:51 +0000 UTC Type:0 Mac:52:54:00:c0:53:22 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-413642 Clientid:01:52:54:00:c0:53:22}
	I1101 10:01:55.680060  565615 main.go:143] libmachine: domain test-preload-413642 has defined IP address 192.168.39.175 and MAC address 52:54:00:c0:53:22 in network mk-test-preload-413642
	I1101 10:01:55.682762  565615 main.go:143] libmachine: domain test-preload-413642 has defined MAC address 52:54:00:c0:53:22 in network mk-test-preload-413642
	I1101 10:01:55.683209  565615 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:53:22", ip: ""} in network mk-test-preload-413642: {Iface:virbr1 ExpiryTime:2025-11-01 11:01:51 +0000 UTC Type:0 Mac:52:54:00:c0:53:22 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-413642 Clientid:01:52:54:00:c0:53:22}
	I1101 10:01:55.683235  565615 main.go:143] libmachine: domain test-preload-413642 has defined IP address 192.168.39.175 and MAC address 52:54:00:c0:53:22 in network mk-test-preload-413642
	I1101 10:01:55.683415  565615 provision.go:143] copyHostCerts
	I1101 10:01:55.683488  565615 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-530629/.minikube/ca.pem, removing ...
	I1101 10:01:55.683518  565615 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-530629/.minikube/ca.pem
	I1101 10:01:55.683613  565615 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-530629/.minikube/ca.pem (1078 bytes)
	I1101 10:01:55.683717  565615 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-530629/.minikube/cert.pem, removing ...
	I1101 10:01:55.683726  565615 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-530629/.minikube/cert.pem
	I1101 10:01:55.683757  565615 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-530629/.minikube/cert.pem (1123 bytes)
	I1101 10:01:55.683814  565615 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-530629/.minikube/key.pem, removing ...
	I1101 10:01:55.683821  565615 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-530629/.minikube/key.pem
	I1101 10:01:55.683843  565615 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-530629/.minikube/key.pem (1675 bytes)
	I1101 10:01:55.683890  565615 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-530629/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca-key.pem org=jenkins.test-preload-413642 san=[127.0.0.1 192.168.39.175 localhost minikube test-preload-413642]
	I1101 10:01:55.737336  565615 provision.go:177] copyRemoteCerts
	I1101 10:01:55.737407  565615 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:01:55.740879  565615 main.go:143] libmachine: domain test-preload-413642 has defined MAC address 52:54:00:c0:53:22 in network mk-test-preload-413642
	I1101 10:01:55.741811  565615 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:53:22", ip: ""} in network mk-test-preload-413642: {Iface:virbr1 ExpiryTime:2025-11-01 11:01:51 +0000 UTC Type:0 Mac:52:54:00:c0:53:22 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-413642 Clientid:01:52:54:00:c0:53:22}
	I1101 10:01:55.741846  565615 main.go:143] libmachine: domain test-preload-413642 has defined IP address 192.168.39.175 and MAC address 52:54:00:c0:53:22 in network mk-test-preload-413642
	I1101 10:01:55.742073  565615 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/test-preload-413642/id_rsa Username:docker}
	I1101 10:01:55.832964  565615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:01:55.865059  565615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1101 10:01:55.897149  565615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:01:55.928590  565615 provision.go:87] duration metric: took 252.156238ms to configureAuth
	I1101 10:01:55.928629  565615 buildroot.go:189] setting minikube options for container-runtime
	I1101 10:01:55.928800  565615 config.go:182] Loaded profile config "test-preload-413642": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1101 10:01:55.931568  565615 main.go:143] libmachine: domain test-preload-413642 has defined MAC address 52:54:00:c0:53:22 in network mk-test-preload-413642
	I1101 10:01:55.932017  565615 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:53:22", ip: ""} in network mk-test-preload-413642: {Iface:virbr1 ExpiryTime:2025-11-01 11:01:51 +0000 UTC Type:0 Mac:52:54:00:c0:53:22 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-413642 Clientid:01:52:54:00:c0:53:22}
	I1101 10:01:55.932045  565615 main.go:143] libmachine: domain test-preload-413642 has defined IP address 192.168.39.175 and MAC address 52:54:00:c0:53:22 in network mk-test-preload-413642
	I1101 10:01:55.932345  565615 main.go:143] libmachine: Using SSH client type: native
	I1101 10:01:55.932558  565615 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1101 10:01:55.932574  565615 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:01:56.189650  565615 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:01:56.189696  565615 machine.go:97] duration metric: took 901.810327ms to provisionDockerMachine
	I1101 10:01:56.189711  565615 start.go:293] postStartSetup for "test-preload-413642" (driver="kvm2")
	I1101 10:01:56.189730  565615 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:01:56.189807  565615 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:01:56.192942  565615 main.go:143] libmachine: domain test-preload-413642 has defined MAC address 52:54:00:c0:53:22 in network mk-test-preload-413642
	I1101 10:01:56.193466  565615 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:53:22", ip: ""} in network mk-test-preload-413642: {Iface:virbr1 ExpiryTime:2025-11-01 11:01:51 +0000 UTC Type:0 Mac:52:54:00:c0:53:22 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-413642 Clientid:01:52:54:00:c0:53:22}
	I1101 10:01:56.193506  565615 main.go:143] libmachine: domain test-preload-413642 has defined IP address 192.168.39.175 and MAC address 52:54:00:c0:53:22 in network mk-test-preload-413642
	I1101 10:01:56.193678  565615 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/test-preload-413642/id_rsa Username:docker}
	I1101 10:01:56.283524  565615 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:01:56.288888  565615 info.go:137] Remote host: Buildroot 2025.02
	I1101 10:01:56.288933  565615 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-530629/.minikube/addons for local assets ...
	I1101 10:01:56.289006  565615 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-530629/.minikube/files for local assets ...
	I1101 10:01:56.289103  565615 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-530629/.minikube/files/etc/ssl/certs/5345152.pem -> 5345152.pem in /etc/ssl/certs
	I1101 10:01:56.289217  565615 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:01:56.301851  565615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/files/etc/ssl/certs/5345152.pem --> /etc/ssl/certs/5345152.pem (1708 bytes)
	I1101 10:01:56.333597  565615 start.go:296] duration metric: took 143.840153ms for postStartSetup
	I1101 10:01:56.333666  565615 fix.go:56] duration metric: took 17.609188954s for fixHost
	I1101 10:01:56.336599  565615 main.go:143] libmachine: domain test-preload-413642 has defined MAC address 52:54:00:c0:53:22 in network mk-test-preload-413642
	I1101 10:01:56.337088  565615 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:53:22", ip: ""} in network mk-test-preload-413642: {Iface:virbr1 ExpiryTime:2025-11-01 11:01:51 +0000 UTC Type:0 Mac:52:54:00:c0:53:22 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-413642 Clientid:01:52:54:00:c0:53:22}
	I1101 10:01:56.337118  565615 main.go:143] libmachine: domain test-preload-413642 has defined IP address 192.168.39.175 and MAC address 52:54:00:c0:53:22 in network mk-test-preload-413642
	I1101 10:01:56.337316  565615 main.go:143] libmachine: Using SSH client type: native
	I1101 10:01:56.337557  565615 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1101 10:01:56.337571  565615 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1101 10:01:56.451879  565615 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761991316.406243702
	
	I1101 10:01:56.451925  565615 fix.go:216] guest clock: 1761991316.406243702
	I1101 10:01:56.451938  565615 fix.go:229] Guest: 2025-11-01 10:01:56.406243702 +0000 UTC Remote: 2025-11-01 10:01:56.333672779 +0000 UTC m=+20.334441767 (delta=72.570923ms)
	I1101 10:01:56.451960  565615 fix.go:200] guest clock delta is within tolerance: 72.570923ms
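For reference, the fix.go lines above compare the guest's wall clock against the host's and only flag a resync when the skew exceeds a tolerance. Below is a minimal Go sketch of that comparison using the timestamps from this log; the one-second threshold is an assumed value for illustration, not minikube's actual constant.

```go
package main

import (
	"fmt"
	"time"
)

// clockDelta returns the absolute skew between the guest and host clocks.
func clockDelta(guest, host time.Time) time.Duration {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	// Values taken from the log above: guest clock vs. remote (host) clock.
	guest := time.Date(2025, 11, 1, 10, 1, 56, 406243702, time.UTC)
	host := time.Date(2025, 11, 1, 10, 1, 56, 333672779, time.UTC)

	const tolerance = time.Second // assumed threshold, for illustration only
	delta := clockDelta(guest, host)
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
```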
	I1101 10:01:56.451968  565615 start.go:83] releasing machines lock for "test-preload-413642", held for 17.72750677s
	I1101 10:01:56.455225  565615 main.go:143] libmachine: domain test-preload-413642 has defined MAC address 52:54:00:c0:53:22 in network mk-test-preload-413642
	I1101 10:01:56.455620  565615 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:53:22", ip: ""} in network mk-test-preload-413642: {Iface:virbr1 ExpiryTime:2025-11-01 11:01:51 +0000 UTC Type:0 Mac:52:54:00:c0:53:22 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-413642 Clientid:01:52:54:00:c0:53:22}
	I1101 10:01:56.455646  565615 main.go:143] libmachine: domain test-preload-413642 has defined IP address 192.168.39.175 and MAC address 52:54:00:c0:53:22 in network mk-test-preload-413642
	I1101 10:01:56.456256  565615 ssh_runner.go:195] Run: cat /version.json
	I1101 10:01:56.456334  565615 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:01:56.459446  565615 main.go:143] libmachine: domain test-preload-413642 has defined MAC address 52:54:00:c0:53:22 in network mk-test-preload-413642
	I1101 10:01:56.459746  565615 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:53:22", ip: ""} in network mk-test-preload-413642: {Iface:virbr1 ExpiryTime:2025-11-01 11:01:51 +0000 UTC Type:0 Mac:52:54:00:c0:53:22 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-413642 Clientid:01:52:54:00:c0:53:22}
	I1101 10:01:56.459763  565615 main.go:143] libmachine: domain test-preload-413642 has defined MAC address 52:54:00:c0:53:22 in network mk-test-preload-413642
	I1101 10:01:56.459772  565615 main.go:143] libmachine: domain test-preload-413642 has defined IP address 192.168.39.175 and MAC address 52:54:00:c0:53:22 in network mk-test-preload-413642
	I1101 10:01:56.459977  565615 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/test-preload-413642/id_rsa Username:docker}
	I1101 10:01:56.460331  565615 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:53:22", ip: ""} in network mk-test-preload-413642: {Iface:virbr1 ExpiryTime:2025-11-01 11:01:51 +0000 UTC Type:0 Mac:52:54:00:c0:53:22 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-413642 Clientid:01:52:54:00:c0:53:22}
	I1101 10:01:56.460362  565615 main.go:143] libmachine: domain test-preload-413642 has defined IP address 192.168.39.175 and MAC address 52:54:00:c0:53:22 in network mk-test-preload-413642
	I1101 10:01:56.460564  565615 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/test-preload-413642/id_rsa Username:docker}
	I1101 10:01:56.543511  565615 ssh_runner.go:195] Run: systemctl --version
	I1101 10:01:56.572782  565615 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:01:56.719599  565615 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:01:56.727491  565615 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:01:56.727579  565615 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:01:56.756519  565615 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 10:01:56.756554  565615 start.go:496] detecting cgroup driver to use...
	I1101 10:01:56.756648  565615 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:01:56.778200  565615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:01:56.795818  565615 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:01:56.795918  565615 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:01:56.814640  565615 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:01:56.832454  565615 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:01:56.985430  565615 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:01:57.202951  565615 docker.go:234] disabling docker service ...
	I1101 10:01:57.203024  565615 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:01:57.219598  565615 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:01:57.234976  565615 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:01:57.404157  565615 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:01:57.544528  565615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:01:57.561846  565615 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:01:57.586859  565615 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1101 10:01:57.586964  565615 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:01:57.600907  565615 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:01:57.600989  565615 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:01:57.614606  565615 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:01:57.628273  565615 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:01:57.641677  565615 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:01:57.655562  565615 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:01:57.668790  565615 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:01:57.691004  565615 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
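The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf on the guest: pinning the pause image, forcing the cgroupfs cgroup manager, restoring conmon_cgroup, and injecting a default_sysctls entry. A rough Go sketch of the same line-oriented rewrite pattern on a local file follows; the file name and exact regexes are illustrative assumptions, and in the log the edits actually run remotely via ssh_runner.

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteLine replaces every line matching pattern with repl in the given file,
// mirroring the `sed -i 's|^.*cgroup_manager = .*$|...|'` style edits in the log.
func rewriteLine(path, pattern, repl string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(pattern)
	out := re.ReplaceAll(data, []byte(repl))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	// Hypothetical local copy of the CRI-O drop-in config.
	const conf = "02-crio.conf"
	if err := rewriteLine(conf, `(?m)^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.10"`); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := rewriteLine(conf, `(?m)^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```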
	I1101 10:01:57.705031  565615 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:01:57.716722  565615 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 10:01:57.716807  565615 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 10:01:57.738807  565615 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:01:57.751746  565615 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:01:57.897311  565615 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:01:58.010835  565615 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:01:58.010953  565615 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:01:58.016916  565615 start.go:564] Will wait 60s for crictl version
	I1101 10:01:58.017006  565615 ssh_runner.go:195] Run: which crictl
	I1101 10:01:58.021630  565615 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 10:01:58.066508  565615 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1101 10:01:58.066619  565615 ssh_runner.go:195] Run: crio --version
	I1101 10:01:58.097107  565615 ssh_runner.go:195] Run: crio --version
	I1101 10:01:58.129469  565615 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1101 10:01:58.133207  565615 main.go:143] libmachine: domain test-preload-413642 has defined MAC address 52:54:00:c0:53:22 in network mk-test-preload-413642
	I1101 10:01:58.133595  565615 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:53:22", ip: ""} in network mk-test-preload-413642: {Iface:virbr1 ExpiryTime:2025-11-01 11:01:51 +0000 UTC Type:0 Mac:52:54:00:c0:53:22 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-413642 Clientid:01:52:54:00:c0:53:22}
	I1101 10:01:58.133615  565615 main.go:143] libmachine: domain test-preload-413642 has defined IP address 192.168.39.175 and MAC address 52:54:00:c0:53:22 in network mk-test-preload-413642
	I1101 10:01:58.133829  565615 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1101 10:01:58.138982  565615 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
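The bash one-liner above strips any stale host.minikube.internal entry from /etc/hosts and appends the current mapping. A hedged Go equivalent of that rewrite, operating on a hypothetical local copy of the file rather than the guest's /etc/hosts:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "\t<name>" and appends
// "<ip>\t<name>", mirroring the grep -v / echo pipeline in the log.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // remove the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// "hosts" is a local test copy; the real target is /etc/hosts on the guest.
	if err := ensureHostsEntry("hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```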
	I1101 10:01:58.154417  565615 kubeadm.go:884] updating cluster {Name:test-preload-413642 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-413642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.175 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:01:58.154520  565615 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1101 10:01:58.154561  565615 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:01:58.194659  565615 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1101 10:01:58.194750  565615 ssh_runner.go:195] Run: which lz4
	I1101 10:01:58.199520  565615 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 10:01:58.204682  565615 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 10:01:58.204718  565615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1101 10:01:59.828417  565615 crio.go:462] duration metric: took 1.628926562s to copy over tarball
	I1101 10:01:59.828527  565615 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 10:02:01.646615  565615 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.818047929s)
	I1101 10:02:01.646654  565615 crio.go:469] duration metric: took 1.818191892s to extract the tarball
	I1101 10:02:01.646665  565615 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 10:02:01.687028  565615 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:02:01.731535  565615 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:02:01.731579  565615 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:02:01.731589  565615 kubeadm.go:935] updating node { 192.168.39.175 8443 v1.32.0 crio true true} ...
	I1101 10:02:01.731728  565615 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-413642 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.175
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-413642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:02:01.731827  565615 ssh_runner.go:195] Run: crio config
	I1101 10:02:01.779182  565615 cni.go:84] Creating CNI manager for ""
	I1101 10:02:01.779230  565615 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 10:02:01.779266  565615 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:02:01.779297  565615 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.175 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-413642 NodeName:test-preload-413642 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.175"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.175 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:02:01.779491  565615 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.175
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-413642"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.175"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.175"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
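The block above is the rendered kubeadm configuration that gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines later. As a sketch only, a fragment like this could be produced from a Go text/template; the template text and struct fields below are illustrative assumptions, not minikube's actual template.

```go
package main

import (
	"os"
	"text/template"
)

// kubeadmValues is an illustrative subset of the values substituted into the config.
type kubeadmValues struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

const fragment = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	tmpl := template.Must(template.New("kubeadm").Parse(fragment))
	// Values taken from the log above.
	v := kubeadmValues{
		AdvertiseAddress:  "192.168.39.175",
		BindPort:          8443,
		NodeName:          "test-preload-413642",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.32.0",
	}
	if err := tmpl.Execute(os.Stdout, v); err != nil {
		panic(err)
	}
}
```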
	I1101 10:02:01.779563  565615 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1101 10:02:01.792401  565615 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:02:01.792491  565615 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:02:01.804367  565615 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1101 10:02:01.826058  565615 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:02:01.847323  565615 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1101 10:02:01.869661  565615 ssh_runner.go:195] Run: grep 192.168.39.175	control-plane.minikube.internal$ /etc/hosts
	I1101 10:02:01.874188  565615 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.175	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:02:01.889641  565615 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:02:02.034759  565615 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:02:02.056573  565615 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/test-preload-413642 for IP: 192.168.39.175
	I1101 10:02:02.056598  565615 certs.go:195] generating shared ca certs ...
	I1101 10:02:02.056615  565615 certs.go:227] acquiring lock for ca certs: {Name:mkfa41f6ee02a6d4adbbbd414d6f4b29bf47b076 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:02:02.056800  565615 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-530629/.minikube/ca.key
	I1101 10:02:02.056855  565615 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.key
	I1101 10:02:02.056870  565615 certs.go:257] generating profile certs ...
	I1101 10:02:02.057015  565615 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/test-preload-413642/client.key
	I1101 10:02:02.057126  565615 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/test-preload-413642/apiserver.key.3aabe53f
	I1101 10:02:02.057181  565615 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/test-preload-413642/proxy-client.key
	I1101 10:02:02.057340  565615 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/534515.pem (1338 bytes)
	W1101 10:02:02.057385  565615 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-530629/.minikube/certs/534515_empty.pem, impossibly tiny 0 bytes
	I1101 10:02:02.057398  565615 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:02:02.057429  565615 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:02:02.057459  565615 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:02:02.057501  565615 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/key.pem (1675 bytes)
	I1101 10:02:02.057554  565615 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/files/etc/ssl/certs/5345152.pem (1708 bytes)
	I1101 10:02:02.058155  565615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:02:02.098467  565615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:02:02.134748  565615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:02:02.169105  565615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:02:02.201592  565615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/test-preload-413642/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1101 10:02:02.233463  565615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/test-preload-413642/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:02:02.265266  565615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/test-preload-413642/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:02:02.297172  565615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/test-preload-413642/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:02:02.329065  565615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/certs/534515.pem --> /usr/share/ca-certificates/534515.pem (1338 bytes)
	I1101 10:02:02.361304  565615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/files/etc/ssl/certs/5345152.pem --> /usr/share/ca-certificates/5345152.pem (1708 bytes)
	I1101 10:02:02.393106  565615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:02:02.425029  565615 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:02:02.447716  565615 ssh_runner.go:195] Run: openssl version
	I1101 10:02:02.454549  565615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:02:02.468292  565615 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:02:02.474170  565615 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:02:02.474255  565615 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:02:02.482351  565615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:02:02.496622  565615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/534515.pem && ln -fs /usr/share/ca-certificates/534515.pem /etc/ssl/certs/534515.pem"
	I1101 10:02:02.511344  565615 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/534515.pem
	I1101 10:02:02.517339  565615 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:07 /usr/share/ca-certificates/534515.pem
	I1101 10:02:02.517410  565615 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/534515.pem
	I1101 10:02:02.525010  565615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/534515.pem /etc/ssl/certs/51391683.0"
	I1101 10:02:02.539178  565615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5345152.pem && ln -fs /usr/share/ca-certificates/5345152.pem /etc/ssl/certs/5345152.pem"
	I1101 10:02:02.553499  565615 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5345152.pem
	I1101 10:02:02.559381  565615 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:07 /usr/share/ca-certificates/5345152.pem
	I1101 10:02:02.559464  565615 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5345152.pem
	I1101 10:02:02.566996  565615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5345152.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:02:02.581374  565615 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:02:02.587543  565615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:02:02.595542  565615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:02:02.603805  565615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:02:02.612137  565615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:02:02.619971  565615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:02:02.628024  565615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
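Each openssl x509 -checkend 86400 call above verifies that a control-plane certificate remains valid for at least the next 24 hours. A minimal Go sketch of the same check against a local PEM file; the file name is a placeholder, whereas the log checks certs under /var/lib/minikube/certs on the guest.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// checkEnd reports whether the certificate at path is still valid for at least
// the given window, mirroring `openssl x509 -noout -checkend 86400`.
func checkEnd(path string, window time.Duration) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if time.Now().Add(window).After(cert.NotAfter) {
		return fmt.Errorf("certificate expires at %s, within %s", cert.NotAfter, window)
	}
	return nil
}

func main() {
	// Placeholder path for illustration.
	if err := checkEnd("apiserver.crt", 24*time.Hour); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least 24h")
}
```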
	I1101 10:02:02.635943  565615 kubeadm.go:401] StartCluster: {Name:test-preload-413642 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-413642 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.175 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:02:02.636052  565615 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:02:02.636143  565615 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:02:02.676919  565615 cri.go:89] found id: ""
	I1101 10:02:02.677019  565615 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:02:02.690033  565615 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:02:02.690069  565615 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:02:02.690122  565615 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:02:02.702702  565615 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:02:02.703130  565615 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-413642" does not appear in /home/jenkins/minikube-integration/21833-530629/kubeconfig
	I1101 10:02:02.703228  565615 kubeconfig.go:62] /home/jenkins/minikube-integration/21833-530629/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-413642" cluster setting kubeconfig missing "test-preload-413642" context setting]
	I1101 10:02:02.703535  565615 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/kubeconfig: {Name:mk1f1e6312f33030082fd627c6f74ca7eee16587 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:02:02.704120  565615 kapi.go:59] client config for test-preload-413642: &rest.Config{Host:"https://192.168.39.175:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21833-530629/.minikube/profiles/test-preload-413642/client.crt", KeyFile:"/home/jenkins/minikube-integration/21833-530629/.minikube/profiles/test-preload-413642/client.key", CAFile:"/home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 10:02:02.704513  565615 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1101 10:02:02.704527  565615 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1101 10:02:02.704533  565615 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1101 10:02:02.704537  565615 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1101 10:02:02.704540  565615 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1101 10:02:02.704939  565615 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:02:02.716928  565615 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.175
	I1101 10:02:02.716965  565615 kubeadm.go:1161] stopping kube-system containers ...
	I1101 10:02:02.716979  565615 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 10:02:02.717050  565615 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:02:02.759069  565615 cri.go:89] found id: ""
	I1101 10:02:02.759164  565615 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 10:02:02.785758  565615 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:02:02.798457  565615 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 10:02:02.798479  565615 kubeadm.go:158] found existing configuration files:
	
	I1101 10:02:02.798528  565615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 10:02:02.810583  565615 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 10:02:02.810647  565615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 10:02:02.823626  565615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 10:02:02.835497  565615 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 10:02:02.835564  565615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:02:02.848261  565615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 10:02:02.859948  565615 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 10:02:02.860008  565615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:02:02.872499  565615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 10:02:02.884005  565615 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 10:02:02.884073  565615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 10:02:02.896328  565615 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:02:02.909120  565615 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 10:02:02.965864  565615 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 10:02:04.221739  565615 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.255832732s)
	I1101 10:02:04.221828  565615 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 10:02:04.460409  565615 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 10:02:04.532101  565615 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 10:02:04.629689  565615 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:02:04.629787  565615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:02:05.130264  565615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:02:05.630072  565615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:02:06.130463  565615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:02:06.630148  565615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:02:07.130148  565615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:02:07.160544  565615 api_server.go:72] duration metric: took 2.530872463s to wait for apiserver process to appear ...
	I1101 10:02:07.160578  565615 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:02:07.160601  565615 api_server.go:253] Checking apiserver healthz at https://192.168.39.175:8443/healthz ...
	I1101 10:02:09.825068  565615 api_server.go:279] https://192.168.39.175:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 10:02:09.825104  565615 api_server.go:103] status: https://192.168.39.175:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 10:02:09.825140  565615 api_server.go:253] Checking apiserver healthz at https://192.168.39.175:8443/healthz ...
	I1101 10:02:09.943116  565615 api_server.go:279] https://192.168.39.175:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:02:09.943152  565615 api_server.go:103] status: https://192.168.39.175:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:02:10.161601  565615 api_server.go:253] Checking apiserver healthz at https://192.168.39.175:8443/healthz ...
	I1101 10:02:10.166735  565615 api_server.go:279] https://192.168.39.175:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:02:10.166762  565615 api_server.go:103] status: https://192.168.39.175:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:02:10.661456  565615 api_server.go:253] Checking apiserver healthz at https://192.168.39.175:8443/healthz ...
	I1101 10:02:10.666013  565615 api_server.go:279] https://192.168.39.175:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:02:10.666042  565615 api_server.go:103] status: https://192.168.39.175:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:02:11.160780  565615 api_server.go:253] Checking apiserver healthz at https://192.168.39.175:8443/healthz ...
	I1101 10:02:11.167829  565615 api_server.go:279] https://192.168.39.175:8443/healthz returned 200:
	ok
	I1101 10:02:11.176921  565615 api_server.go:141] control plane version: v1.32.0
	I1101 10:02:11.176949  565615 api_server.go:131] duration metric: took 4.016363453s to wait for apiserver health ...
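The health wait above simply retries GET /healthz until the apiserver stops returning 500 for the still-running post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes). A minimal Go sketch of that polling loop, assuming the endpoint taken from this log; TLS verification is skipped here purely to keep the example self-contained, whereas the real client authenticates with the profile's client certificate and cluster CA:

	// healthz_poll.go: illustrative sketch only, not minikube's implementation.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Illustration only; the real check trusts the cluster CA instead.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		url := "https://192.168.39.175:8443/healthz" // endpoint from the log above
		for {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println("healthz request failed:", err)
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return // apiserver reports healthy, stop polling
				}
			}
			time.Sleep(500 * time.Millisecond) // retry until the post-start hooks finish
		}
	}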
	I1101 10:02:11.176958  565615 cni.go:84] Creating CNI manager for ""
	I1101 10:02:11.176965  565615 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 10:02:11.178597  565615 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 10:02:11.179938  565615 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 10:02:11.199512  565615 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1101 10:02:11.255578  565615 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:02:11.262263  565615 system_pods.go:59] 7 kube-system pods found
	I1101 10:02:11.262307  565615 system_pods.go:61] "coredns-668d6bf9bc-g5l7p" [aae6a77e-21a1-4ae2-8fd6-c3e685401bdf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:02:11.262317  565615 system_pods.go:61] "etcd-test-preload-413642" [0ed6a6e5-95cc-4e39-93f4-07028b76fc31] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:02:11.262328  565615 system_pods.go:61] "kube-apiserver-test-preload-413642" [f6a146a1-b1cc-4353-badf-a86e86660b27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:02:11.262337  565615 system_pods.go:61] "kube-controller-manager-test-preload-413642" [56cb3a0e-a8ee-4336-b5d1-98495e3a7b3e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:02:11.262349  565615 system_pods.go:61] "kube-proxy-r9mqh" [c77dbd17-18eb-425a-94cb-767f154ed69b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 10:02:11.262360  565615 system_pods.go:61] "kube-scheduler-test-preload-413642" [a3336053-1282-4805-b2a0-e8e174d92db3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:02:11.262368  565615 system_pods.go:61] "storage-provisioner" [7a34922d-7147-436a-ace3-a11201d0bf49] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:02:11.262378  565615 system_pods.go:74] duration metric: took 6.774092ms to wait for pod list to return data ...
	I1101 10:02:11.262390  565615 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:02:11.268197  565615 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 10:02:11.268227  565615 node_conditions.go:123] node cpu capacity is 2
	I1101 10:02:11.268242  565615 node_conditions.go:105] duration metric: took 5.846725ms to run NodePressure ...
	I1101 10:02:11.268309  565615 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 10:02:11.553033  565615 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1101 10:02:11.559102  565615 kubeadm.go:744] kubelet initialised
	I1101 10:02:11.559124  565615 kubeadm.go:745] duration metric: took 6.064257ms waiting for restarted kubelet to initialise ...
	I1101 10:02:11.559142  565615 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 10:02:11.577232  565615 ops.go:34] apiserver oom_adj: -16
	I1101 10:02:11.577271  565615 kubeadm.go:602] duration metric: took 8.887192885s to restartPrimaryControlPlane
	I1101 10:02:11.577287  565615 kubeadm.go:403] duration metric: took 8.94135792s to StartCluster
	I1101 10:02:11.577316  565615 settings.go:142] acquiring lock: {Name:mke0bea80b55c21af3a3a0f83862cfe6da014dd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:02:11.577431  565615 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-530629/kubeconfig
	I1101 10:02:11.578158  565615 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/kubeconfig: {Name:mk1f1e6312f33030082fd627c6f74ca7eee16587 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:02:11.578410  565615 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.175 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:02:11.578540  565615 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:02:11.578621  565615 config.go:182] Loaded profile config "test-preload-413642": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1101 10:02:11.578663  565615 addons.go:70] Setting default-storageclass=true in profile "test-preload-413642"
	I1101 10:02:11.578679  565615 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-413642"
	I1101 10:02:11.578653  565615 addons.go:70] Setting storage-provisioner=true in profile "test-preload-413642"
	I1101 10:02:11.578727  565615 addons.go:239] Setting addon storage-provisioner=true in "test-preload-413642"
	W1101 10:02:11.578738  565615 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:02:11.578757  565615 host.go:66] Checking if "test-preload-413642" exists ...
	I1101 10:02:11.580159  565615 out.go:179] * Verifying Kubernetes components...
	I1101 10:02:11.581166  565615 kapi.go:59] client config for test-preload-413642: &rest.Config{Host:"https://192.168.39.175:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21833-530629/.minikube/profiles/test-preload-413642/client.crt", KeyFile:"/home/jenkins/minikube-integration/21833-530629/.minikube/profiles/test-preload-413642/client.key", CAFile:"/home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 10:02:11.581458  565615 addons.go:239] Setting addon default-storageclass=true in "test-preload-413642"
	W1101 10:02:11.581474  565615 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:02:11.581502  565615 host.go:66] Checking if "test-preload-413642" exists ...
	I1101 10:02:11.582353  565615 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:02:11.582407  565615 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:02:11.583093  565615 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:02:11.583114  565615 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:02:11.583757  565615 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:02:11.583777  565615 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:02:11.586052  565615 main.go:143] libmachine: domain test-preload-413642 has defined MAC address 52:54:00:c0:53:22 in network mk-test-preload-413642
	I1101 10:02:11.586443  565615 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:53:22", ip: ""} in network mk-test-preload-413642: {Iface:virbr1 ExpiryTime:2025-11-01 11:01:51 +0000 UTC Type:0 Mac:52:54:00:c0:53:22 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-413642 Clientid:01:52:54:00:c0:53:22}
	I1101 10:02:11.586479  565615 main.go:143] libmachine: domain test-preload-413642 has defined IP address 192.168.39.175 and MAC address 52:54:00:c0:53:22 in network mk-test-preload-413642
	I1101 10:02:11.586642  565615 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/test-preload-413642/id_rsa Username:docker}
	I1101 10:02:11.586771  565615 main.go:143] libmachine: domain test-preload-413642 has defined MAC address 52:54:00:c0:53:22 in network mk-test-preload-413642
	I1101 10:02:11.587217  565615 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:53:22", ip: ""} in network mk-test-preload-413642: {Iface:virbr1 ExpiryTime:2025-11-01 11:01:51 +0000 UTC Type:0 Mac:52:54:00:c0:53:22 Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-413642 Clientid:01:52:54:00:c0:53:22}
	I1101 10:02:11.587252  565615 main.go:143] libmachine: domain test-preload-413642 has defined IP address 192.168.39.175 and MAC address 52:54:00:c0:53:22 in network mk-test-preload-413642
	I1101 10:02:11.587399  565615 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/test-preload-413642/id_rsa Username:docker}
	I1101 10:02:11.811777  565615 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:02:11.842390  565615 node_ready.go:35] waiting up to 6m0s for node "test-preload-413642" to be "Ready" ...
	I1101 10:02:11.848221  565615 node_ready.go:49] node "test-preload-413642" is "Ready"
	I1101 10:02:11.848261  565615 node_ready.go:38] duration metric: took 5.815338ms for node "test-preload-413642" to be "Ready" ...
	I1101 10:02:11.848280  565615 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:02:11.848332  565615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:02:11.871959  565615 api_server.go:72] duration metric: took 293.506668ms to wait for apiserver process to appear ...
	I1101 10:02:11.872001  565615 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:02:11.872033  565615 api_server.go:253] Checking apiserver healthz at https://192.168.39.175:8443/healthz ...
	I1101 10:02:11.877739  565615 api_server.go:279] https://192.168.39.175:8443/healthz returned 200:
	ok
	I1101 10:02:11.879123  565615 api_server.go:141] control plane version: v1.32.0
	I1101 10:02:11.879151  565615 api_server.go:131] duration metric: took 7.139957ms to wait for apiserver health ...
	I1101 10:02:11.879162  565615 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:02:11.883115  565615 system_pods.go:59] 7 kube-system pods found
	I1101 10:02:11.883156  565615 system_pods.go:61] "coredns-668d6bf9bc-g5l7p" [aae6a77e-21a1-4ae2-8fd6-c3e685401bdf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:02:11.883166  565615 system_pods.go:61] "etcd-test-preload-413642" [0ed6a6e5-95cc-4e39-93f4-07028b76fc31] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:02:11.883178  565615 system_pods.go:61] "kube-apiserver-test-preload-413642" [f6a146a1-b1cc-4353-badf-a86e86660b27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:02:11.883188  565615 system_pods.go:61] "kube-controller-manager-test-preload-413642" [56cb3a0e-a8ee-4336-b5d1-98495e3a7b3e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:02:11.883195  565615 system_pods.go:61] "kube-proxy-r9mqh" [c77dbd17-18eb-425a-94cb-767f154ed69b] Running
	I1101 10:02:11.883207  565615 system_pods.go:61] "kube-scheduler-test-preload-413642" [a3336053-1282-4805-b2a0-e8e174d92db3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:02:11.883212  565615 system_pods.go:61] "storage-provisioner" [7a34922d-7147-436a-ace3-a11201d0bf49] Running
	I1101 10:02:11.883220  565615 system_pods.go:74] duration metric: took 4.050413ms to wait for pod list to return data ...
	I1101 10:02:11.883234  565615 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:02:11.889096  565615 default_sa.go:45] found service account: "default"
	I1101 10:02:11.889124  565615 default_sa.go:55] duration metric: took 5.883351ms for default service account to be created ...
	I1101 10:02:11.889138  565615 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:02:11.894362  565615 system_pods.go:86] 7 kube-system pods found
	I1101 10:02:11.894404  565615 system_pods.go:89] "coredns-668d6bf9bc-g5l7p" [aae6a77e-21a1-4ae2-8fd6-c3e685401bdf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:02:11.894419  565615 system_pods.go:89] "etcd-test-preload-413642" [0ed6a6e5-95cc-4e39-93f4-07028b76fc31] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:02:11.894431  565615 system_pods.go:89] "kube-apiserver-test-preload-413642" [f6a146a1-b1cc-4353-badf-a86e86660b27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:02:11.894442  565615 system_pods.go:89] "kube-controller-manager-test-preload-413642" [56cb3a0e-a8ee-4336-b5d1-98495e3a7b3e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:02:11.894449  565615 system_pods.go:89] "kube-proxy-r9mqh" [c77dbd17-18eb-425a-94cb-767f154ed69b] Running
	I1101 10:02:11.894458  565615 system_pods.go:89] "kube-scheduler-test-preload-413642" [a3336053-1282-4805-b2a0-e8e174d92db3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:02:11.894467  565615 system_pods.go:89] "storage-provisioner" [7a34922d-7147-436a-ace3-a11201d0bf49] Running
	I1101 10:02:11.894476  565615 system_pods.go:126] duration metric: took 5.331234ms to wait for k8s-apps to be running ...
	I1101 10:02:11.894488  565615 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:02:11.894548  565615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:02:11.918531  565615 system_svc.go:56] duration metric: took 24.02793ms WaitForService to wait for kubelet
	I1101 10:02:11.918579  565615 kubeadm.go:587] duration metric: took 340.134657ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:02:11.918609  565615 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:02:11.923189  565615 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 10:02:11.923230  565615 node_conditions.go:123] node cpu capacity is 2
	I1101 10:02:11.923245  565615 node_conditions.go:105] duration metric: took 4.629766ms to run NodePressure ...
	I1101 10:02:11.923262  565615 start.go:242] waiting for startup goroutines ...
	I1101 10:02:12.003413  565615 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:02:12.006623  565615 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:02:12.730223  565615 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 10:02:12.731634  565615 addons.go:515] duration metric: took 1.153089979s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 10:02:12.731697  565615 start.go:247] waiting for cluster config update ...
	I1101 10:02:12.731717  565615 start.go:256] writing updated cluster config ...
	I1101 10:02:12.732028  565615 ssh_runner.go:195] Run: rm -f paused
	I1101 10:02:12.739196  565615 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:02:12.739814  565615 kapi.go:59] client config for test-preload-413642: &rest.Config{Host:"https://192.168.39.175:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21833-530629/.minikube/profiles/test-preload-413642/client.crt", KeyFile:"/home/jenkins/minikube-integration/21833-530629/.minikube/profiles/test-preload-413642/client.key", CAFile:"/home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 10:02:12.746646  565615 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-g5l7p" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:02:14.768583  565615 pod_ready.go:104] pod "coredns-668d6bf9bc-g5l7p" is not "Ready", error: <nil>
	W1101 10:02:17.258290  565615 pod_ready.go:104] pod "coredns-668d6bf9bc-g5l7p" is not "Ready", error: <nil>
	I1101 10:02:17.754057  565615 pod_ready.go:94] pod "coredns-668d6bf9bc-g5l7p" is "Ready"
	I1101 10:02:17.754089  565615 pod_ready.go:86] duration metric: took 5.007411913s for pod "coredns-668d6bf9bc-g5l7p" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:02:17.756644  565615 pod_ready.go:83] waiting for pod "etcd-test-preload-413642" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:02:17.762440  565615 pod_ready.go:94] pod "etcd-test-preload-413642" is "Ready"
	I1101 10:02:17.762481  565615 pod_ready.go:86] duration metric: took 5.799495ms for pod "etcd-test-preload-413642" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:02:17.765031  565615 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-413642" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:02:19.771169  565615 pod_ready.go:104] pod "kube-apiserver-test-preload-413642" is not "Ready", error: <nil>
	I1101 10:02:21.772157  565615 pod_ready.go:94] pod "kube-apiserver-test-preload-413642" is "Ready"
	I1101 10:02:21.772193  565615 pod_ready.go:86] duration metric: took 4.007132711s for pod "kube-apiserver-test-preload-413642" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:02:21.774281  565615 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-413642" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:02:21.779253  565615 pod_ready.go:94] pod "kube-controller-manager-test-preload-413642" is "Ready"
	I1101 10:02:21.779277  565615 pod_ready.go:86] duration metric: took 4.974847ms for pod "kube-controller-manager-test-preload-413642" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:02:21.781164  565615 pod_ready.go:83] waiting for pod "kube-proxy-r9mqh" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:02:21.786229  565615 pod_ready.go:94] pod "kube-proxy-r9mqh" is "Ready"
	I1101 10:02:21.786267  565615 pod_ready.go:86] duration metric: took 5.06805ms for pod "kube-proxy-r9mqh" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:02:21.950962  565615 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-413642" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:02:23.957727  565615 pod_ready.go:104] pod "kube-scheduler-test-preload-413642" is not "Ready", error: <nil>
	I1101 10:02:24.960843  565615 pod_ready.go:94] pod "kube-scheduler-test-preload-413642" is "Ready"
	I1101 10:02:24.960888  565615 pod_ready.go:86] duration metric: took 3.009895012s for pod "kube-scheduler-test-preload-413642" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:02:24.960937  565615 pod_ready.go:40] duration metric: took 12.22168635s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
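The pod_ready.go waits above repeatedly list the kube-system pods and check each pod's PodReady condition until every control-plane pod reports Ready or the 4m0s budget runs out. A rough client-go equivalent of that loop, assuming the kubeconfig path written earlier in this log:

	// pod_ready_wait.go: rough sketch of the readiness wait, not minikube's code.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path as written by the run above; adjust for other environments.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21833-530629/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()

		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
			if err != nil {
				panic(err) // includes the case where the 4m budget expires
			}
			allReady := true
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					allReady = false
					fmt.Printf("pod %q is not Ready yet\n", pods.Items[i].Name)
				}
			}
			if allReady {
				fmt.Println("all kube-system pods are Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
	}

	// podReady reports whether the pod's PodReady condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}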
	I1101 10:02:25.006726  565615 start.go:628] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I1101 10:02:25.008611  565615 out.go:203] 
	W1101 10:02:25.009993  565615 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I1101 10:02:25.011305  565615 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1101 10:02:25.012579  565615 out.go:179] * Done! kubectl is now configured to use "test-preload-413642" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 01 10:02:25 test-preload-413642 crio[839]: time="2025-11-01 10:02:25.833237518Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761991345833153833,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4612b8c3-84fb-4d79-8a1b-4110f5220422 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:02:25 test-preload-413642 crio[839]: time="2025-11-01 10:02:25.833848890Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2076c797-2ebc-4c00-a7b5-3791bfb20046 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:02:25 test-preload-413642 crio[839]: time="2025-11-01 10:02:25.833920787Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2076c797-2ebc-4c00-a7b5-3791bfb20046 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:02:25 test-preload-413642 crio[839]: time="2025-11-01 10:02:25.834112345Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a161bffd6e7cf05b34d292c9abdd833ce8739d1f3c436aecc45166ee8af6c38e,PodSandboxId:33e2c8c81858cbe1905fbbde64f48326f53823f6e075e7f4d530e38fca28a022,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761991334709559283,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-g5l7p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aae6a77e-21a1-4ae2-8fd6-c3e685401bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11873d343bf67d4d698afd9c9d2d5f7b5596bc9fb9c24fa7b4d62fd645ddff0a,PodSandboxId:bbcf62f07187a2daa5b3678bd3cf6ecaab96783286c3cc7f79cc8c189a102011,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761991331068717235,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r9mqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: c77dbd17-18eb-425a-94cb-767f154ed69b,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6789656b36feff060332926a9e65ce551d5871d2527d6dbc8887d13a9bce6b8b,PodSandboxId:680b297f066b0d02191bc17591c09954127aba759c629fceb0c45a0bf90ccf5b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761991331052301791,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a
34922d-7147-436a-ace3-a11201d0bf49,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a4e592726e1ef2a3002a7ffd2f9823395b80775a5c180a8e95125f8dea3e60d,PodSandboxId:7c0561267d2ef44781e213b304b18077b9cc0a17ca95b41e8ba354fa529c3efc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761991326604064980,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-413642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e785c8578
85fc58e9cafc656d1004cc7,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb87466c9a0dfa153f6b83619b9c89a667e38853690ecdbf21dc74027eb18559,PodSandboxId:b628698dc9f11708227137fb2ff36416186e8ed0fa47d71ed7cb77b894643536,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761991326565093671,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-413642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 79dfec767cbff07ba672c4a52a1e7b28,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7db644314604bcf215befbc76a6b03965430b4ecc290b0cbaba7a3c15267633e,PodSandboxId:c3ddb5bc86def5214d662a61894de18dbbb579acb1bd572ed5dcd941cf3a35de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761991326519253382,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-413642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d350
723d124ead0cd31f3d1c37647032,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:647f941a7f22dde7843473651e155e7c1888e091ad7e81dd9176c1774d1f4181,PodSandboxId:c3066d4dc16238bdd3ee814977c3abc47eaf8700f661b3b9423760f91ef4d311,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761991326467159239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-413642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243ef2d79ba485d072e8d760b6b93955,},Annotation
s:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2076c797-2ebc-4c00-a7b5-3791bfb20046 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:02:25 test-preload-413642 crio[839]: time="2025-11-01 10:02:25.875890777Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=11c5b018-e41a-40b4-8c0f-e7b9823d9a11 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:02:25 test-preload-413642 crio[839]: time="2025-11-01 10:02:25.875958224Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=11c5b018-e41a-40b4-8c0f-e7b9823d9a11 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:02:25 test-preload-413642 crio[839]: time="2025-11-01 10:02:25.877359468Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ac2381b1-8efd-4941-8b26-4a75b1546adc name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:02:25 test-preload-413642 crio[839]: time="2025-11-01 10:02:25.877822728Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761991345877797988,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ac2381b1-8efd-4941-8b26-4a75b1546adc name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:02:25 test-preload-413642 crio[839]: time="2025-11-01 10:02:25.878358006Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=550bac44-3167-46cd-af42-ba60936a8c76 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:02:25 test-preload-413642 crio[839]: time="2025-11-01 10:02:25.878430253Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=550bac44-3167-46cd-af42-ba60936a8c76 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:02:25 test-preload-413642 crio[839]: time="2025-11-01 10:02:25.878586874Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a161bffd6e7cf05b34d292c9abdd833ce8739d1f3c436aecc45166ee8af6c38e,PodSandboxId:33e2c8c81858cbe1905fbbde64f48326f53823f6e075e7f4d530e38fca28a022,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761991334709559283,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-g5l7p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aae6a77e-21a1-4ae2-8fd6-c3e685401bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11873d343bf67d4d698afd9c9d2d5f7b5596bc9fb9c24fa7b4d62fd645ddff0a,PodSandboxId:bbcf62f07187a2daa5b3678bd3cf6ecaab96783286c3cc7f79cc8c189a102011,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761991331068717235,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r9mqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: c77dbd17-18eb-425a-94cb-767f154ed69b,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6789656b36feff060332926a9e65ce551d5871d2527d6dbc8887d13a9bce6b8b,PodSandboxId:680b297f066b0d02191bc17591c09954127aba759c629fceb0c45a0bf90ccf5b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761991331052301791,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a
34922d-7147-436a-ace3-a11201d0bf49,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a4e592726e1ef2a3002a7ffd2f9823395b80775a5c180a8e95125f8dea3e60d,PodSandboxId:7c0561267d2ef44781e213b304b18077b9cc0a17ca95b41e8ba354fa529c3efc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761991326604064980,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-413642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e785c8578
85fc58e9cafc656d1004cc7,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb87466c9a0dfa153f6b83619b9c89a667e38853690ecdbf21dc74027eb18559,PodSandboxId:b628698dc9f11708227137fb2ff36416186e8ed0fa47d71ed7cb77b894643536,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761991326565093671,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-413642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 79dfec767cbff07ba672c4a52a1e7b28,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7db644314604bcf215befbc76a6b03965430b4ecc290b0cbaba7a3c15267633e,PodSandboxId:c3ddb5bc86def5214d662a61894de18dbbb579acb1bd572ed5dcd941cf3a35de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761991326519253382,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-413642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d350
723d124ead0cd31f3d1c37647032,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:647f941a7f22dde7843473651e155e7c1888e091ad7e81dd9176c1774d1f4181,PodSandboxId:c3066d4dc16238bdd3ee814977c3abc47eaf8700f661b3b9423760f91ef4d311,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761991326467159239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-413642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243ef2d79ba485d072e8d760b6b93955,},Annotation
s:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=550bac44-3167-46cd-af42-ba60936a8c76 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:02:25 test-preload-413642 crio[839]: time="2025-11-01 10:02:25.922940296Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b4ef924-9da6-4698-9409-9daf67c2b713 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:02:25 test-preload-413642 crio[839]: time="2025-11-01 10:02:25.923008364Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b4ef924-9da6-4698-9409-9daf67c2b713 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:02:25 test-preload-413642 crio[839]: time="2025-11-01 10:02:25.924979752Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=980911a3-bba2-4fde-a394-c7c045ecde74 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:02:25 test-preload-413642 crio[839]: time="2025-11-01 10:02:25.925436551Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761991345925413907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=980911a3-bba2-4fde-a394-c7c045ecde74 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:02:25 test-preload-413642 crio[839]: time="2025-11-01 10:02:25.925963948Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a7d663a3-ff85-455f-8d02-4e43981b33df name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:02:25 test-preload-413642 crio[839]: time="2025-11-01 10:02:25.926246660Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a7d663a3-ff85-455f-8d02-4e43981b33df name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:02:25 test-preload-413642 crio[839]: time="2025-11-01 10:02:25.926569135Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a161bffd6e7cf05b34d292c9abdd833ce8739d1f3c436aecc45166ee8af6c38e,PodSandboxId:33e2c8c81858cbe1905fbbde64f48326f53823f6e075e7f4d530e38fca28a022,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761991334709559283,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-g5l7p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aae6a77e-21a1-4ae2-8fd6-c3e685401bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11873d343bf67d4d698afd9c9d2d5f7b5596bc9fb9c24fa7b4d62fd645ddff0a,PodSandboxId:bbcf62f07187a2daa5b3678bd3cf6ecaab96783286c3cc7f79cc8c189a102011,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761991331068717235,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r9mqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: c77dbd17-18eb-425a-94cb-767f154ed69b,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6789656b36feff060332926a9e65ce551d5871d2527d6dbc8887d13a9bce6b8b,PodSandboxId:680b297f066b0d02191bc17591c09954127aba759c629fceb0c45a0bf90ccf5b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761991331052301791,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a
34922d-7147-436a-ace3-a11201d0bf49,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a4e592726e1ef2a3002a7ffd2f9823395b80775a5c180a8e95125f8dea3e60d,PodSandboxId:7c0561267d2ef44781e213b304b18077b9cc0a17ca95b41e8ba354fa529c3efc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761991326604064980,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-413642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e785c8578
85fc58e9cafc656d1004cc7,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb87466c9a0dfa153f6b83619b9c89a667e38853690ecdbf21dc74027eb18559,PodSandboxId:b628698dc9f11708227137fb2ff36416186e8ed0fa47d71ed7cb77b894643536,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761991326565093671,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-413642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 79dfec767cbff07ba672c4a52a1e7b28,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7db644314604bcf215befbc76a6b03965430b4ecc290b0cbaba7a3c15267633e,PodSandboxId:c3ddb5bc86def5214d662a61894de18dbbb579acb1bd572ed5dcd941cf3a35de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761991326519253382,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-413642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d350
723d124ead0cd31f3d1c37647032,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:647f941a7f22dde7843473651e155e7c1888e091ad7e81dd9176c1774d1f4181,PodSandboxId:c3066d4dc16238bdd3ee814977c3abc47eaf8700f661b3b9423760f91ef4d311,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761991326467159239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-413642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243ef2d79ba485d072e8d760b6b93955,},Annotation
s:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a7d663a3-ff85-455f-8d02-4e43981b33df name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:02:25 test-preload-413642 crio[839]: time="2025-11-01 10:02:25.963780544Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2d45fb88-8107-4dc8-85e6-4c725d76614a name=/runtime.v1.RuntimeService/Version
	Nov 01 10:02:25 test-preload-413642 crio[839]: time="2025-11-01 10:02:25.963848302Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2d45fb88-8107-4dc8-85e6-4c725d76614a name=/runtime.v1.RuntimeService/Version
	Nov 01 10:02:25 test-preload-413642 crio[839]: time="2025-11-01 10:02:25.964875581Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=12c424c0-a343-4509-99e1-ff47e1898080 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:02:25 test-preload-413642 crio[839]: time="2025-11-01 10:02:25.965336079Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761991345965313492,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=12c424c0-a343-4509-99e1-ff47e1898080 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:02:25 test-preload-413642 crio[839]: time="2025-11-01 10:02:25.966075617Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2e078d6f-3801-49ac-a6ec-e567ee34e5fd name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:02:25 test-preload-413642 crio[839]: time="2025-11-01 10:02:25.966289228Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2e078d6f-3801-49ac-a6ec-e567ee34e5fd name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:02:25 test-preload-413642 crio[839]: time="2025-11-01 10:02:25.966596260Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a161bffd6e7cf05b34d292c9abdd833ce8739d1f3c436aecc45166ee8af6c38e,PodSandboxId:33e2c8c81858cbe1905fbbde64f48326f53823f6e075e7f4d530e38fca28a022,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761991334709559283,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-g5l7p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aae6a77e-21a1-4ae2-8fd6-c3e685401bdf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11873d343bf67d4d698afd9c9d2d5f7b5596bc9fb9c24fa7b4d62fd645ddff0a,PodSandboxId:bbcf62f07187a2daa5b3678bd3cf6ecaab96783286c3cc7f79cc8c189a102011,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761991331068717235,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r9mqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c77dbd17-18eb-425a-94cb-767f154ed69b,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6789656b36feff060332926a9e65ce551d5871d2527d6dbc8887d13a9bce6b8b,PodSandboxId:680b297f066b0d02191bc17591c09954127aba759c629fceb0c45a0bf90ccf5b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761991331052301791,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a34922d-7147-436a-ace3-a11201d0bf49,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a4e592726e1ef2a3002a7ffd2f9823395b80775a5c180a8e95125f8dea3e60d,PodSandboxId:7c0561267d2ef44781e213b304b18077b9cc0a17ca95b41e8ba354fa529c3efc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761991326604064980,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-413642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e785c857885fc58e9cafc656d1004cc7,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb87466c9a0dfa153f6b83619b9c89a667e38853690ecdbf21dc74027eb18559,PodSandboxId:b628698dc9f11708227137fb2ff36416186e8ed0fa47d71ed7cb77b894643536,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761991326565093671,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-413642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79dfec767cbff07ba672c4a52a1e7b28,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7db644314604bcf215befbc76a6b03965430b4ecc290b0cbaba7a3c15267633e,PodSandboxId:c3ddb5bc86def5214d662a61894de18dbbb579acb1bd572ed5dcd941cf3a35de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761991326519253382,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-413642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d350723d124ead0cd31f3d1c37647032,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:647f941a7f22dde7843473651e155e7c1888e091ad7e81dd9176c1774d1f4181,PodSandboxId:c3066d4dc16238bdd3ee814977c3abc47eaf8700f661b3b9423760f91ef4d311,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761991326467159239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-413642,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 243ef2d79ba485d072e8d760b6b93955,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2e078d6f-3801-49ac-a6ec-e567ee34e5fd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a161bffd6e7cf       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   11 seconds ago      Running             coredns                   1                   33e2c8c81858c       coredns-668d6bf9bc-g5l7p
	11873d343bf67       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   14 seconds ago      Running             kube-proxy                1                   bbcf62f07187a       kube-proxy-r9mqh
	6789656b36fef       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       2                   680b297f066b0       storage-provisioner
	7a4e592726e1e       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   19 seconds ago      Running             kube-apiserver            1                   7c0561267d2ef       kube-apiserver-test-preload-413642
	eb87466c9a0df       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   19 seconds ago      Running             kube-controller-manager   1                   b628698dc9f11       kube-controller-manager-test-preload-413642
	7db644314604b       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   19 seconds ago      Running             kube-scheduler            1                   c3ddb5bc86def       kube-scheduler-test-preload-413642
	647f941a7f22d       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   19 seconds ago      Running             etcd                      1                   c3066d4dc1623       etcd-test-preload-413642
	
	
	==> coredns [a161bffd6e7cf05b34d292c9abdd833ce8739d1f3c436aecc45166ee8af6c38e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:36903 - 47040 "HINFO IN 7048567133997309155.6966807227581605886. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022348814s
	
	
	==> describe nodes <==
	Name:               test-preload-413642
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-413642
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=test-preload-413642
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_00_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:00:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-413642
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:02:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:02:11 +0000   Sat, 01 Nov 2025 10:00:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:02:11 +0000   Sat, 01 Nov 2025 10:00:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:02:11 +0000   Sat, 01 Nov 2025 10:00:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:02:11 +0000   Sat, 01 Nov 2025 10:02:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.175
	  Hostname:    test-preload-413642
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 83cb3be6abdb479dbf2b0d123971ae09
	  System UUID:                83cb3be6-abdb-479d-bf2b-0d123971ae09
	  Boot ID:                    29807440-3643-4962-bd31-76e921049d24
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-g5l7p                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     96s
	  kube-system                 etcd-test-preload-413642                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         102s
	  kube-system                 kube-apiserver-test-preload-413642             250m (12%)    0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-controller-manager-test-preload-413642    200m (10%)    0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-proxy-r9mqh                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-scheduler-test-preload-413642             100m (5%)     0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 93s                kube-proxy       
	  Normal   Starting                 14s                kube-proxy       
	  Normal   Starting                 101s               kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  101s               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  100s               kubelet          Node test-preload-413642 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    100s               kubelet          Node test-preload-413642 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     100s               kubelet          Node test-preload-413642 status is now: NodeHasSufficientPID
	  Normal   NodeReady                100s               kubelet          Node test-preload-413642 status is now: NodeReady
	  Normal   RegisteredNode           97s                node-controller  Node test-preload-413642 event: Registered Node test-preload-413642 in Controller
	  Normal   Starting                 22s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node test-preload-413642 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node test-preload-413642 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node test-preload-413642 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 16s                kubelet          Node test-preload-413642 has been rebooted, boot id: 29807440-3643-4962-bd31-76e921049d24
	  Normal   RegisteredNode           13s                node-controller  Node test-preload-413642 event: Registered Node test-preload-413642 in Controller
	
	
	==> dmesg <==
	[Nov 1 10:01] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000049] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000368] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.944635] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.086330] kauditd_printk_skb: 4 callbacks suppressed
	[Nov 1 10:02] kauditd_printk_skb: 102 callbacks suppressed
	[  +6.520145] kauditd_printk_skb: 177 callbacks suppressed
	[  +3.118939] kauditd_printk_skb: 197 callbacks suppressed
	
	
	==> etcd [647f941a7f22dde7843473651e155e7c1888e091ad7e81dd9176c1774d1f4181] <==
	{"level":"info","ts":"2025-11-01T10:02:06.922414Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-01T10:02:06.924491Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"99b2d3c172539956","initial-advertise-peer-urls":["https://192.168.39.175:2380"],"listen-peer-urls":["https://192.168.39.175:2380"],"advertise-client-urls":["https://192.168.39.175:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.175:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-01T10:02:06.924525Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-01T10:02:06.923859Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.175:2380"}
	{"level":"info","ts":"2025-11-01T10:02:06.924563Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.175:2380"}
	{"level":"info","ts":"2025-11-01T10:02:06.907964Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T10:02:06.924576Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T10:02:06.924588Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T10:02:06.907878Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-11-01T10:02:08.668680Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"99b2d3c172539956 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-01T10:02:08.668718Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"99b2d3c172539956 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-01T10:02:08.668755Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"99b2d3c172539956 received MsgPreVoteResp from 99b2d3c172539956 at term 2"}
	{"level":"info","ts":"2025-11-01T10:02:08.668768Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"99b2d3c172539956 became candidate at term 3"}
	{"level":"info","ts":"2025-11-01T10:02:08.668774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"99b2d3c172539956 received MsgVoteResp from 99b2d3c172539956 at term 3"}
	{"level":"info","ts":"2025-11-01T10:02:08.668781Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"99b2d3c172539956 became leader at term 3"}
	{"level":"info","ts":"2025-11-01T10:02:08.668790Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 99b2d3c172539956 elected leader 99b2d3c172539956 at term 3"}
	{"level":"info","ts":"2025-11-01T10:02:08.670594Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"99b2d3c172539956","local-member-attributes":"{Name:test-preload-413642 ClientURLs:[https://192.168.39.175:2379]}","request-path":"/0/members/99b2d3c172539956/attributes","cluster-id":"915d0614c2e3855c","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-01T10:02:08.670765Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T10:02:08.671073Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-01T10:02:08.671115Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-01T10:02:08.670817Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T10:02:08.672076Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-01T10:02:08.672103Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-01T10:02:08.672796Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.175:2379"}
	{"level":"info","ts":"2025-11-01T10:02:08.673090Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:02:26 up 0 min,  0 users,  load average: 1.21, 0.34, 0.12
	Linux test-preload-413642 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [7a4e592726e1ef2a3002a7ffd2f9823395b80775a5c180a8e95125f8dea3e60d] <==
	I1101 10:02:09.872072       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 10:02:09.872093       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 10:02:09.872536       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1101 10:02:09.880537       1 aggregator.go:171] initial CRD sync complete...
	I1101 10:02:09.880572       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 10:02:09.880579       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:02:09.888827       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1101 10:02:09.924079       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1101 10:02:09.924144       1 policy_source.go:240] refreshing policies
	I1101 10:02:09.938821       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 10:02:09.942868       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 10:02:09.946484       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 10:02:09.950381       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1101 10:02:09.957545       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:02:09.994838       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:02:09.998756       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:02:10.636932       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1101 10:02:10.745595       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:02:11.376670       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1101 10:02:11.427240       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1101 10:02:11.461829       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:02:11.477726       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:02:13.101071       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:02:13.303368       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1101 10:02:13.351702       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [eb87466c9a0dfa153f6b83619b9c89a667e38853690ecdbf21dc74027eb18559] <==
	I1101 10:02:13.044249       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1101 10:02:13.047812       1 shared_informer.go:320] Caches are synced for TTL after finished
	I1101 10:02:13.047852       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1101 10:02:13.049099       1 shared_informer.go:320] Caches are synced for service account
	I1101 10:02:13.050341       1 shared_informer.go:320] Caches are synced for attach detach
	I1101 10:02:13.050377       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1101 10:02:13.055744       1 shared_informer.go:320] Caches are synced for disruption
	I1101 10:02:13.056925       1 shared_informer.go:320] Caches are synced for namespace
	I1101 10:02:13.056980       1 shared_informer.go:320] Caches are synced for resource quota
	I1101 10:02:13.059438       1 shared_informer.go:320] Caches are synced for expand
	I1101 10:02:13.067845       1 shared_informer.go:320] Caches are synced for PV protection
	I1101 10:02:13.070150       1 shared_informer.go:320] Caches are synced for node
	I1101 10:02:13.070204       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 10:02:13.070753       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 10:02:13.070770       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1101 10:02:13.070778       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1101 10:02:13.070945       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-413642"
	I1101 10:02:13.082441       1 shared_informer.go:320] Caches are synced for job
	I1101 10:02:13.082584       1 shared_informer.go:320] Caches are synced for daemon sets
	I1101 10:02:13.083176       1 shared_informer.go:320] Caches are synced for garbage collector
	I1101 10:02:13.310441       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="308.408439ms"
	I1101 10:02:13.311325       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="57.859µs"
	I1101 10:02:15.774495       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="43.191µs"
	I1101 10:02:17.575860       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="15.0271ms"
	I1101 10:02:17.577524       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="82.454µs"
	
	
	==> kube-proxy [11873d343bf67d4d698afd9c9d2d5f7b5596bc9fb9c24fa7b4d62fd645ddff0a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1101 10:02:11.375884       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1101 10:02:11.392415       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.175"]
	E1101 10:02:11.392931       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:02:11.473362       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1101 10:02:11.473431       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 10:02:11.473466       1 server_linux.go:170] "Using iptables Proxier"
	I1101 10:02:11.484046       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:02:11.484510       1 server.go:497] "Version info" version="v1.32.0"
	I1101 10:02:11.484554       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:02:11.486316       1 config.go:199] "Starting service config controller"
	I1101 10:02:11.486370       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1101 10:02:11.486400       1 config.go:105] "Starting endpoint slice config controller"
	I1101 10:02:11.486404       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1101 10:02:11.488663       1 config.go:329] "Starting node config controller"
	I1101 10:02:11.488712       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1101 10:02:11.586549       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1101 10:02:11.586566       1 shared_informer.go:320] Caches are synced for service config
	I1101 10:02:11.589707       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7db644314604bcf215befbc76a6b03965430b4ecc290b0cbaba7a3c15267633e] <==
	I1101 10:02:07.507905       1 serving.go:386] Generated self-signed cert in-memory
	W1101 10:02:09.791885       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 10:02:09.792710       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:02:09.792777       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 10:02:09.792800       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 10:02:09.911722       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1101 10:02:09.911770       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:02:09.915577       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:02:09.915702       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 10:02:09.919728       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1101 10:02:09.919820       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:02:10.016011       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 01 10:02:10 test-preload-413642 kubelet[1161]: E1101 10:02:10.017327    1161 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-413642\" already exists" pod="kube-system/kube-controller-manager-test-preload-413642"
	Nov 01 10:02:10 test-preload-413642 kubelet[1161]: I1101 10:02:10.017355    1161 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-413642"
	Nov 01 10:02:10 test-preload-413642 kubelet[1161]: E1101 10:02:10.038535    1161 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-413642\" already exists" pod="kube-system/kube-scheduler-test-preload-413642"
	Nov 01 10:02:10 test-preload-413642 kubelet[1161]: I1101 10:02:10.039012    1161 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-413642"
	Nov 01 10:02:10 test-preload-413642 kubelet[1161]: E1101 10:02:10.051359    1161 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-413642\" already exists" pod="kube-system/etcd-test-preload-413642"
	Nov 01 10:02:10 test-preload-413642 kubelet[1161]: I1101 10:02:10.051404    1161 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-413642"
	Nov 01 10:02:10 test-preload-413642 kubelet[1161]: E1101 10:02:10.060687    1161 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-413642\" already exists" pod="kube-system/kube-apiserver-test-preload-413642"
	Nov 01 10:02:10 test-preload-413642 kubelet[1161]: I1101 10:02:10.540431    1161 apiserver.go:52] "Watching apiserver"
	Nov 01 10:02:10 test-preload-413642 kubelet[1161]: E1101 10:02:10.546921    1161 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-g5l7p" podUID="aae6a77e-21a1-4ae2-8fd6-c3e685401bdf"
	Nov 01 10:02:10 test-preload-413642 kubelet[1161]: I1101 10:02:10.583467    1161 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Nov 01 10:02:10 test-preload-413642 kubelet[1161]: I1101 10:02:10.632332    1161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c77dbd17-18eb-425a-94cb-767f154ed69b-xtables-lock\") pod \"kube-proxy-r9mqh\" (UID: \"c77dbd17-18eb-425a-94cb-767f154ed69b\") " pod="kube-system/kube-proxy-r9mqh"
	Nov 01 10:02:10 test-preload-413642 kubelet[1161]: I1101 10:02:10.632381    1161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7a34922d-7147-436a-ace3-a11201d0bf49-tmp\") pod \"storage-provisioner\" (UID: \"7a34922d-7147-436a-ace3-a11201d0bf49\") " pod="kube-system/storage-provisioner"
	Nov 01 10:02:10 test-preload-413642 kubelet[1161]: I1101 10:02:10.632408    1161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c77dbd17-18eb-425a-94cb-767f154ed69b-lib-modules\") pod \"kube-proxy-r9mqh\" (UID: \"c77dbd17-18eb-425a-94cb-767f154ed69b\") " pod="kube-system/kube-proxy-r9mqh"
	Nov 01 10:02:10 test-preload-413642 kubelet[1161]: E1101 10:02:10.632785    1161 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 01 10:02:10 test-preload-413642 kubelet[1161]: E1101 10:02:10.632850    1161 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/aae6a77e-21a1-4ae2-8fd6-c3e685401bdf-config-volume podName:aae6a77e-21a1-4ae2-8fd6-c3e685401bdf nodeName:}" failed. No retries permitted until 2025-11-01 10:02:11.132830499 +0000 UTC m=+6.703421483 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/aae6a77e-21a1-4ae2-8fd6-c3e685401bdf-config-volume") pod "coredns-668d6bf9bc-g5l7p" (UID: "aae6a77e-21a1-4ae2-8fd6-c3e685401bdf") : object "kube-system"/"coredns" not registered
	Nov 01 10:02:11 test-preload-413642 kubelet[1161]: E1101 10:02:11.135441    1161 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 01 10:02:11 test-preload-413642 kubelet[1161]: E1101 10:02:11.135514    1161 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/aae6a77e-21a1-4ae2-8fd6-c3e685401bdf-config-volume podName:aae6a77e-21a1-4ae2-8fd6-c3e685401bdf nodeName:}" failed. No retries permitted until 2025-11-01 10:02:12.135500956 +0000 UTC m=+7.706091924 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/aae6a77e-21a1-4ae2-8fd6-c3e685401bdf-config-volume") pod "coredns-668d6bf9bc-g5l7p" (UID: "aae6a77e-21a1-4ae2-8fd6-c3e685401bdf") : object "kube-system"/"coredns" not registered
	Nov 01 10:02:11 test-preload-413642 kubelet[1161]: I1101 10:02:11.510532    1161 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	Nov 01 10:02:12 test-preload-413642 kubelet[1161]: E1101 10:02:12.143239    1161 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 01 10:02:12 test-preload-413642 kubelet[1161]: E1101 10:02:12.143327    1161 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/aae6a77e-21a1-4ae2-8fd6-c3e685401bdf-config-volume podName:aae6a77e-21a1-4ae2-8fd6-c3e685401bdf nodeName:}" failed. No retries permitted until 2025-11-01 10:02:14.143307634 +0000 UTC m=+9.713898590 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/aae6a77e-21a1-4ae2-8fd6-c3e685401bdf-config-volume") pod "coredns-668d6bf9bc-g5l7p" (UID: "aae6a77e-21a1-4ae2-8fd6-c3e685401bdf") : object "kube-system"/"coredns" not registered
	Nov 01 10:02:14 test-preload-413642 kubelet[1161]: E1101 10:02:14.643894    1161 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761991334643528049,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 01 10:02:14 test-preload-413642 kubelet[1161]: E1101 10:02:14.643915    1161 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761991334643528049,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 01 10:02:16 test-preload-413642 kubelet[1161]: I1101 10:02:16.761183    1161 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 10:02:24 test-preload-413642 kubelet[1161]: E1101 10:02:24.646427    1161 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761991344645463376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 01 10:02:24 test-preload-413642 kubelet[1161]: E1101 10:02:24.646451    1161 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761991344645463376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [6789656b36feff060332926a9e65ce551d5871d2527d6dbc8887d13a9bce6b8b] <==
	I1101 10:02:11.203899       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-413642 -n test-preload-413642
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-413642 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-413642" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-413642
--- FAIL: TestPreload (155.60s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (61.74s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-533709 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-533709 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (55.931620767s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-533709] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21833
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21833-530629/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-530629/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-533709" primary control-plane node in "pause-533709" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-533709" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:09:48.292198  572974 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:09:48.292539  572974 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:09:48.292552  572974 out.go:374] Setting ErrFile to fd 2...
	I1101 10:09:48.292560  572974 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:09:48.292825  572974 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-530629/.minikube/bin
	I1101 10:09:48.293377  572974 out.go:368] Setting JSON to false
	I1101 10:09:48.294457  572974 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":67910,"bootTime":1761923878,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:09:48.294554  572974 start.go:143] virtualization: kvm guest
	I1101 10:09:48.296782  572974 out.go:179] * [pause-533709] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:09:48.298658  572974 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 10:09:48.298683  572974 notify.go:221] Checking for updates...
	I1101 10:09:48.301711  572974 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:09:48.303064  572974 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-530629/kubeconfig
	I1101 10:09:48.304455  572974 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-530629/.minikube
	I1101 10:09:48.306276  572974 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:09:48.307396  572974 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:09:48.308877  572974 config.go:182] Loaded profile config "pause-533709": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:09:48.309509  572974 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:09:48.353655  572974 out.go:179] * Using the kvm2 driver based on existing profile
	I1101 10:09:48.354914  572974 start.go:309] selected driver: kvm2
	I1101 10:09:48.354954  572974 start.go:930] validating driver "kvm2" against &{Name:pause-533709 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-533709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.122 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:09:48.355156  572974 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:09:48.356563  572974 cni.go:84] Creating CNI manager for ""
	I1101 10:09:48.356649  572974 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 10:09:48.356723  572974 start.go:353] cluster config:
	{Name:pause-533709 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-533709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.122 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:09:48.356935  572974 iso.go:125] acquiring lock: {Name:mk4a0ae0d13e232f8e381ad8e5059e42b27a0733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:09:48.358704  572974 out.go:179] * Starting "pause-533709" primary control-plane node in "pause-533709" cluster
	I1101 10:09:48.359872  572974 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:09:48.359924  572974 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-530629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 10:09:48.359932  572974 cache.go:59] Caching tarball of preloaded images
	I1101 10:09:48.360020  572974 preload.go:233] Found /home/jenkins/minikube-integration/21833-530629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 10:09:48.360033  572974 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:09:48.360172  572974 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/pause-533709/config.json ...
	I1101 10:09:48.360414  572974 start.go:360] acquireMachinesLock for pause-533709: {Name:mk0f0dee5270210132f861d1e08706cfde31b35b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 10:09:48.600780  572974 start.go:364] duration metric: took 240.314512ms to acquireMachinesLock for "pause-533709"
	I1101 10:09:48.600838  572974 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:09:48.600847  572974 fix.go:54] fixHost starting: 
	I1101 10:09:48.604409  572974 fix.go:112] recreateIfNeeded on pause-533709: state=Running err=<nil>
	W1101 10:09:48.604438  572974 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 10:09:48.606744  572974 out.go:252] * Updating the running kvm2 "pause-533709" VM ...
	I1101 10:09:48.606786  572974 machine.go:94] provisionDockerMachine start ...
	I1101 10:09:48.611936  572974 main.go:143] libmachine: domain pause-533709 has defined MAC address 52:54:00:6c:d5:2b in network mk-pause-533709
	I1101 10:09:48.612504  572974 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6c:d5:2b", ip: ""} in network mk-pause-533709: {Iface:virbr3 ExpiryTime:2025-11-01 11:08:41 +0000 UTC Type:0 Mac:52:54:00:6c:d5:2b Iaid: IPaddr:192.168.61.122 Prefix:24 Hostname:pause-533709 Clientid:01:52:54:00:6c:d5:2b}
	I1101 10:09:48.612547  572974 main.go:143] libmachine: domain pause-533709 has defined IP address 192.168.61.122 and MAC address 52:54:00:6c:d5:2b in network mk-pause-533709
	I1101 10:09:48.612845  572974 main.go:143] libmachine: Using SSH client type: native
	I1101 10:09:48.613204  572974 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.61.122 22 <nil> <nil>}
	I1101 10:09:48.613228  572974 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:09:48.743069  572974 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-533709
	
	I1101 10:09:48.743108  572974 buildroot.go:166] provisioning hostname "pause-533709"
	I1101 10:09:48.747658  572974 main.go:143] libmachine: domain pause-533709 has defined MAC address 52:54:00:6c:d5:2b in network mk-pause-533709
	I1101 10:09:48.748316  572974 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6c:d5:2b", ip: ""} in network mk-pause-533709: {Iface:virbr3 ExpiryTime:2025-11-01 11:08:41 +0000 UTC Type:0 Mac:52:54:00:6c:d5:2b Iaid: IPaddr:192.168.61.122 Prefix:24 Hostname:pause-533709 Clientid:01:52:54:00:6c:d5:2b}
	I1101 10:09:48.748361  572974 main.go:143] libmachine: domain pause-533709 has defined IP address 192.168.61.122 and MAC address 52:54:00:6c:d5:2b in network mk-pause-533709
	I1101 10:09:48.748671  572974 main.go:143] libmachine: Using SSH client type: native
	I1101 10:09:48.749045  572974 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.61.122 22 <nil> <nil>}
	I1101 10:09:48.749071  572974 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-533709 && echo "pause-533709" | sudo tee /etc/hostname
	I1101 10:09:48.903039  572974 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-533709
	
	I1101 10:09:48.907026  572974 main.go:143] libmachine: domain pause-533709 has defined MAC address 52:54:00:6c:d5:2b in network mk-pause-533709
	I1101 10:09:48.907595  572974 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6c:d5:2b", ip: ""} in network mk-pause-533709: {Iface:virbr3 ExpiryTime:2025-11-01 11:08:41 +0000 UTC Type:0 Mac:52:54:00:6c:d5:2b Iaid: IPaddr:192.168.61.122 Prefix:24 Hostname:pause-533709 Clientid:01:52:54:00:6c:d5:2b}
	I1101 10:09:48.907657  572974 main.go:143] libmachine: domain pause-533709 has defined IP address 192.168.61.122 and MAC address 52:54:00:6c:d5:2b in network mk-pause-533709
	I1101 10:09:48.907948  572974 main.go:143] libmachine: Using SSH client type: native
	I1101 10:09:48.908239  572974 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.61.122 22 <nil> <nil>}
	I1101 10:09:48.908258  572974 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-533709' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-533709/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-533709' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:09:49.039363  572974 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:09:49.039404  572974 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21833-530629/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-530629/.minikube}
	I1101 10:09:49.039447  572974 buildroot.go:174] setting up certificates
	I1101 10:09:49.039463  572974 provision.go:84] configureAuth start
	I1101 10:09:49.043081  572974 main.go:143] libmachine: domain pause-533709 has defined MAC address 52:54:00:6c:d5:2b in network mk-pause-533709
	I1101 10:09:49.043646  572974 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6c:d5:2b", ip: ""} in network mk-pause-533709: {Iface:virbr3 ExpiryTime:2025-11-01 11:08:41 +0000 UTC Type:0 Mac:52:54:00:6c:d5:2b Iaid: IPaddr:192.168.61.122 Prefix:24 Hostname:pause-533709 Clientid:01:52:54:00:6c:d5:2b}
	I1101 10:09:49.043673  572974 main.go:143] libmachine: domain pause-533709 has defined IP address 192.168.61.122 and MAC address 52:54:00:6c:d5:2b in network mk-pause-533709
	I1101 10:09:49.047430  572974 main.go:143] libmachine: domain pause-533709 has defined MAC address 52:54:00:6c:d5:2b in network mk-pause-533709
	I1101 10:09:49.047871  572974 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6c:d5:2b", ip: ""} in network mk-pause-533709: {Iface:virbr3 ExpiryTime:2025-11-01 11:08:41 +0000 UTC Type:0 Mac:52:54:00:6c:d5:2b Iaid: IPaddr:192.168.61.122 Prefix:24 Hostname:pause-533709 Clientid:01:52:54:00:6c:d5:2b}
	I1101 10:09:49.047910  572974 main.go:143] libmachine: domain pause-533709 has defined IP address 192.168.61.122 and MAC address 52:54:00:6c:d5:2b in network mk-pause-533709
	I1101 10:09:49.048088  572974 provision.go:143] copyHostCerts
	I1101 10:09:49.048161  572974 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-530629/.minikube/ca.pem, removing ...
	I1101 10:09:49.048186  572974 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-530629/.minikube/ca.pem
	I1101 10:09:49.048269  572974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-530629/.minikube/ca.pem (1078 bytes)
	I1101 10:09:49.048444  572974 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-530629/.minikube/cert.pem, removing ...
	I1101 10:09:49.048460  572974 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-530629/.minikube/cert.pem
	I1101 10:09:49.048501  572974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-530629/.minikube/cert.pem (1123 bytes)
	I1101 10:09:49.048596  572974 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-530629/.minikube/key.pem, removing ...
	I1101 10:09:49.048608  572974 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-530629/.minikube/key.pem
	I1101 10:09:49.048641  572974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-530629/.minikube/key.pem (1675 bytes)
	I1101 10:09:49.048719  572974 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-530629/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca-key.pem org=jenkins.pause-533709 san=[127.0.0.1 192.168.61.122 localhost minikube pause-533709]
	I1101 10:09:49.643251  572974 provision.go:177] copyRemoteCerts
	I1101 10:09:49.643321  572974 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:09:49.646694  572974 main.go:143] libmachine: domain pause-533709 has defined MAC address 52:54:00:6c:d5:2b in network mk-pause-533709
	I1101 10:09:49.647307  572974 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6c:d5:2b", ip: ""} in network mk-pause-533709: {Iface:virbr3 ExpiryTime:2025-11-01 11:08:41 +0000 UTC Type:0 Mac:52:54:00:6c:d5:2b Iaid: IPaddr:192.168.61.122 Prefix:24 Hostname:pause-533709 Clientid:01:52:54:00:6c:d5:2b}
	I1101 10:09:49.647348  572974 main.go:143] libmachine: domain pause-533709 has defined IP address 192.168.61.122 and MAC address 52:54:00:6c:d5:2b in network mk-pause-533709
	I1101 10:09:49.647564  572974 sshutil.go:53] new ssh client: &{IP:192.168.61.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/pause-533709/id_rsa Username:docker}
	I1101 10:09:49.744846  572974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:09:49.785180  572974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1101 10:09:49.823976  572974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 10:09:49.870811  572974 provision.go:87] duration metric: took 831.32465ms to configureAuth
	I1101 10:09:49.870853  572974 buildroot.go:189] setting minikube options for container-runtime
	I1101 10:09:49.871188  572974 config.go:182] Loaded profile config "pause-533709": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:09:49.874315  572974 main.go:143] libmachine: domain pause-533709 has defined MAC address 52:54:00:6c:d5:2b in network mk-pause-533709
	I1101 10:09:49.874755  572974 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6c:d5:2b", ip: ""} in network mk-pause-533709: {Iface:virbr3 ExpiryTime:2025-11-01 11:08:41 +0000 UTC Type:0 Mac:52:54:00:6c:d5:2b Iaid: IPaddr:192.168.61.122 Prefix:24 Hostname:pause-533709 Clientid:01:52:54:00:6c:d5:2b}
	I1101 10:09:49.874782  572974 main.go:143] libmachine: domain pause-533709 has defined IP address 192.168.61.122 and MAC address 52:54:00:6c:d5:2b in network mk-pause-533709
	I1101 10:09:49.875138  572974 main.go:143] libmachine: Using SSH client type: native
	I1101 10:09:49.875402  572974 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.61.122 22 <nil> <nil>}
	I1101 10:09:49.875434  572974 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:09:55.966352  572974 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:09:55.966393  572974 machine.go:97] duration metric: took 7.35959456s to provisionDockerMachine
	I1101 10:09:55.966408  572974 start.go:293] postStartSetup for "pause-533709" (driver="kvm2")
	I1101 10:09:55.966461  572974 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:09:55.966566  572974 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:09:55.970567  572974 main.go:143] libmachine: domain pause-533709 has defined MAC address 52:54:00:6c:d5:2b in network mk-pause-533709
	I1101 10:09:55.971137  572974 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6c:d5:2b", ip: ""} in network mk-pause-533709: {Iface:virbr3 ExpiryTime:2025-11-01 11:08:41 +0000 UTC Type:0 Mac:52:54:00:6c:d5:2b Iaid: IPaddr:192.168.61.122 Prefix:24 Hostname:pause-533709 Clientid:01:52:54:00:6c:d5:2b}
	I1101 10:09:55.971174  572974 main.go:143] libmachine: domain pause-533709 has defined IP address 192.168.61.122 and MAC address 52:54:00:6c:d5:2b in network mk-pause-533709
	I1101 10:09:55.971412  572974 sshutil.go:53] new ssh client: &{IP:192.168.61.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/pause-533709/id_rsa Username:docker}
	I1101 10:09:56.069476  572974 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:09:56.076801  572974 info.go:137] Remote host: Buildroot 2025.02
	I1101 10:09:56.076835  572974 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-530629/.minikube/addons for local assets ...
	I1101 10:09:56.076924  572974 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-530629/.minikube/files for local assets ...
	I1101 10:09:56.077033  572974 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-530629/.minikube/files/etc/ssl/certs/5345152.pem -> 5345152.pem in /etc/ssl/certs
	I1101 10:09:56.077177  572974 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:09:56.095686  572974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/files/etc/ssl/certs/5345152.pem --> /etc/ssl/certs/5345152.pem (1708 bytes)
	I1101 10:09:56.135363  572974 start.go:296] duration metric: took 168.937661ms for postStartSetup
	I1101 10:09:56.135415  572974 fix.go:56] duration metric: took 7.534568193s for fixHost
	I1101 10:09:56.138623  572974 main.go:143] libmachine: domain pause-533709 has defined MAC address 52:54:00:6c:d5:2b in network mk-pause-533709
	I1101 10:09:56.139173  572974 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6c:d5:2b", ip: ""} in network mk-pause-533709: {Iface:virbr3 ExpiryTime:2025-11-01 11:08:41 +0000 UTC Type:0 Mac:52:54:00:6c:d5:2b Iaid: IPaddr:192.168.61.122 Prefix:24 Hostname:pause-533709 Clientid:01:52:54:00:6c:d5:2b}
	I1101 10:09:56.139228  572974 main.go:143] libmachine: domain pause-533709 has defined IP address 192.168.61.122 and MAC address 52:54:00:6c:d5:2b in network mk-pause-533709
	I1101 10:09:56.139449  572974 main.go:143] libmachine: Using SSH client type: native
	I1101 10:09:56.139702  572974 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.61.122 22 <nil> <nil>}
	I1101 10:09:56.139717  572974 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1101 10:09:56.259780  572974 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761991796.252131340
	
	I1101 10:09:56.259805  572974 fix.go:216] guest clock: 1761991796.252131340
	I1101 10:09:56.259813  572974 fix.go:229] Guest: 2025-11-01 10:09:56.25213134 +0000 UTC Remote: 2025-11-01 10:09:56.135420781 +0000 UTC m=+7.909962104 (delta=116.710559ms)
	I1101 10:09:56.259831  572974 fix.go:200] guest clock delta is within tolerance: 116.710559ms
	I1101 10:09:56.259837  572974 start.go:83] releasing machines lock for "pause-533709", held for 7.659020322s
	I1101 10:09:56.263566  572974 main.go:143] libmachine: domain pause-533709 has defined MAC address 52:54:00:6c:d5:2b in network mk-pause-533709
	I1101 10:09:56.264065  572974 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6c:d5:2b", ip: ""} in network mk-pause-533709: {Iface:virbr3 ExpiryTime:2025-11-01 11:08:41 +0000 UTC Type:0 Mac:52:54:00:6c:d5:2b Iaid: IPaddr:192.168.61.122 Prefix:24 Hostname:pause-533709 Clientid:01:52:54:00:6c:d5:2b}
	I1101 10:09:56.264091  572974 main.go:143] libmachine: domain pause-533709 has defined IP address 192.168.61.122 and MAC address 52:54:00:6c:d5:2b in network mk-pause-533709
	I1101 10:09:56.264752  572974 ssh_runner.go:195] Run: cat /version.json
	I1101 10:09:56.264824  572974 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:09:56.268147  572974 main.go:143] libmachine: domain pause-533709 has defined MAC address 52:54:00:6c:d5:2b in network mk-pause-533709
	I1101 10:09:56.268430  572974 main.go:143] libmachine: domain pause-533709 has defined MAC address 52:54:00:6c:d5:2b in network mk-pause-533709
	I1101 10:09:56.268593  572974 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6c:d5:2b", ip: ""} in network mk-pause-533709: {Iface:virbr3 ExpiryTime:2025-11-01 11:08:41 +0000 UTC Type:0 Mac:52:54:00:6c:d5:2b Iaid: IPaddr:192.168.61.122 Prefix:24 Hostname:pause-533709 Clientid:01:52:54:00:6c:d5:2b}
	I1101 10:09:56.268627  572974 main.go:143] libmachine: domain pause-533709 has defined IP address 192.168.61.122 and MAC address 52:54:00:6c:d5:2b in network mk-pause-533709
	I1101 10:09:56.268789  572974 sshutil.go:53] new ssh client: &{IP:192.168.61.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/pause-533709/id_rsa Username:docker}
	I1101 10:09:56.269020  572974 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6c:d5:2b", ip: ""} in network mk-pause-533709: {Iface:virbr3 ExpiryTime:2025-11-01 11:08:41 +0000 UTC Type:0 Mac:52:54:00:6c:d5:2b Iaid: IPaddr:192.168.61.122 Prefix:24 Hostname:pause-533709 Clientid:01:52:54:00:6c:d5:2b}
	I1101 10:09:56.269050  572974 main.go:143] libmachine: domain pause-533709 has defined IP address 192.168.61.122 and MAC address 52:54:00:6c:d5:2b in network mk-pause-533709
	I1101 10:09:56.269224  572974 sshutil.go:53] new ssh client: &{IP:192.168.61.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/pause-533709/id_rsa Username:docker}
	I1101 10:09:56.365293  572974 ssh_runner.go:195] Run: systemctl --version
	I1101 10:09:56.390953  572974 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:09:56.545926  572974 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:09:56.555209  572974 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:09:56.555300  572974 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:09:56.575106  572974 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:09:56.575135  572974 start.go:496] detecting cgroup driver to use...
	I1101 10:09:56.575206  572974 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:09:56.605231  572974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:09:56.631616  572974 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:09:56.631680  572974 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:09:56.656820  572974 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:09:56.677412  572974 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:09:56.881737  572974 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:09:57.075055  572974 docker.go:234] disabling docker service ...
	I1101 10:09:57.075144  572974 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:09:57.105973  572974 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:09:57.124739  572974 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:09:57.319106  572974 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:09:57.513953  572974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:09:57.560583  572974 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:09:57.592517  572974 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:09:57.592609  572974 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:09:57.619966  572974 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:09:57.620055  572974 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:09:57.643445  572974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:09:57.657149  572974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:09:57.670688  572974 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:09:57.685073  572974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:09:57.722574  572974 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:09:57.738925  572974 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:09:57.752613  572974 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:09:57.789871  572974 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:09:57.816739  572974 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:09:58.185873  572974 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:09:58.552952  572974 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:09:58.553038  572974 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:09:58.560855  572974 start.go:564] Will wait 60s for crictl version
	I1101 10:09:58.560945  572974 ssh_runner.go:195] Run: which crictl
	I1101 10:09:58.566623  572974 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 10:09:58.617297  572974 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1101 10:09:58.617407  572974 ssh_runner.go:195] Run: crio --version
	I1101 10:09:58.653643  572974 ssh_runner.go:195] Run: crio --version
	I1101 10:09:58.688468  572974 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1101 10:09:58.693344  572974 main.go:143] libmachine: domain pause-533709 has defined MAC address 52:54:00:6c:d5:2b in network mk-pause-533709
	I1101 10:09:58.693857  572974 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6c:d5:2b", ip: ""} in network mk-pause-533709: {Iface:virbr3 ExpiryTime:2025-11-01 11:08:41 +0000 UTC Type:0 Mac:52:54:00:6c:d5:2b Iaid: IPaddr:192.168.61.122 Prefix:24 Hostname:pause-533709 Clientid:01:52:54:00:6c:d5:2b}
	I1101 10:09:58.693889  572974 main.go:143] libmachine: domain pause-533709 has defined IP address 192.168.61.122 and MAC address 52:54:00:6c:d5:2b in network mk-pause-533709
	I1101 10:09:58.694177  572974 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1101 10:09:58.699454  572974 kubeadm.go:884] updating cluster {Name:pause-533709 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-533709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.122 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:09:58.699641  572974 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:09:58.699710  572974 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:09:58.765403  572974 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:09:58.765428  572974 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:09:58.765483  572974 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:09:58.811550  572974 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:09:58.811581  572974 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:09:58.811592  572974 kubeadm.go:935] updating node { 192.168.61.122 8443 v1.34.1 crio true true} ...
	I1101 10:09:58.811778  572974 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-533709 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.122
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-533709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:09:58.811886  572974 ssh_runner.go:195] Run: crio config
	I1101 10:09:58.881786  572974 cni.go:84] Creating CNI manager for ""
	I1101 10:09:58.881811  572974 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 10:09:58.881840  572974 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:09:58.881869  572974 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.122 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-533709 NodeName:pause-533709 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.122"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.122 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:09:58.882033  572974 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.122
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-533709"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.122"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.122"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:09:58.882111  572974 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:09:58.896797  572974 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:09:58.896950  572974 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:09:58.913502  572974 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1101 10:09:58.942048  572974 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:09:58.967499  572974 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1101 10:09:58.999943  572974 ssh_runner.go:195] Run: grep 192.168.61.122	control-plane.minikube.internal$ /etc/hosts
	I1101 10:09:59.007123  572974 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:09:59.211040  572974 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:09:59.233791  572974 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/pause-533709 for IP: 192.168.61.122
	I1101 10:09:59.233821  572974 certs.go:195] generating shared ca certs ...
	I1101 10:09:59.233844  572974 certs.go:227] acquiring lock for ca certs: {Name:mkfa41f6ee02a6d4adbbbd414d6f4b29bf47b076 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:09:59.234056  572974 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-530629/.minikube/ca.key
	I1101 10:09:59.234112  572974 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.key
	I1101 10:09:59.234127  572974 certs.go:257] generating profile certs ...
	I1101 10:09:59.234252  572974 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/pause-533709/client.key
	I1101 10:09:59.234362  572974 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/pause-533709/apiserver.key.9533d895
	I1101 10:09:59.234431  572974 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/pause-533709/proxy-client.key
	I1101 10:09:59.234577  572974 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/534515.pem (1338 bytes)
	W1101 10:09:59.234618  572974 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-530629/.minikube/certs/534515_empty.pem, impossibly tiny 0 bytes
	I1101 10:09:59.234645  572974 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:09:59.234685  572974 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:09:59.234722  572974 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:09:59.234754  572974 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/key.pem (1675 bytes)
	I1101 10:09:59.234808  572974 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/files/etc/ssl/certs/5345152.pem (1708 bytes)
	I1101 10:09:59.235471  572974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:09:59.279810  572974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:09:59.315764  572974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:09:59.347643  572974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:09:59.384030  572974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/pause-533709/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 10:09:59.429265  572974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/pause-533709/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:09:59.471332  572974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/pause-533709/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:09:59.516039  572974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/pause-533709/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 10:09:59.551857  572974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/certs/534515.pem --> /usr/share/ca-certificates/534515.pem (1338 bytes)
	I1101 10:09:59.664795  572974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/files/etc/ssl/certs/5345152.pem --> /usr/share/ca-certificates/5345152.pem (1708 bytes)
	I1101 10:09:59.757021  572974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:09:59.845147  572974 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:09:59.911038  572974 ssh_runner.go:195] Run: openssl version
	I1101 10:09:59.932273  572974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5345152.pem && ln -fs /usr/share/ca-certificates/5345152.pem /etc/ssl/certs/5345152.pem"
	I1101 10:09:59.969064  572974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5345152.pem
	I1101 10:09:59.982033  572974 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:07 /usr/share/ca-certificates/5345152.pem
	I1101 10:09:59.982107  572974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5345152.pem
	I1101 10:09:59.993408  572974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5345152.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:10:00.007722  572974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:10:00.034419  572974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:10:00.040975  572974 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:10:00.041063  572974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:10:00.051776  572974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:10:00.082350  572974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/534515.pem && ln -fs /usr/share/ca-certificates/534515.pem /etc/ssl/certs/534515.pem"
	I1101 10:10:00.104798  572974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/534515.pem
	I1101 10:10:00.112046  572974 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:07 /usr/share/ca-certificates/534515.pem
	I1101 10:10:00.112120  572974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/534515.pem
	I1101 10:10:00.131350  572974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/534515.pem /etc/ssl/certs/51391683.0"
	I1101 10:10:00.157841  572974 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:10:00.165226  572974 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:10:00.176542  572974 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:10:00.187598  572974 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:10:00.198742  572974 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:10:00.209501  572974 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:10:00.219365  572974 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 10:10:00.231488  572974 kubeadm.go:401] StartCluster: {Name:pause-533709 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-533709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.122 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:10:00.231662  572974 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:10:00.231737  572974 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:10:00.289279  572974 cri.go:89] found id: "b993b8fbb2d6a2d30b60ce04571b393da5a12345208c74d4d9c42e72514262a7"
	I1101 10:10:00.289309  572974 cri.go:89] found id: "aa8db6bc66adcb7f5314b8afd3ae06e27b6df6b2f45271c09a78271c6e6aa221"
	I1101 10:10:00.289316  572974 cri.go:89] found id: "8bbaa009a8d7c572ab9c6f67864a5b74d4937c9c0fdfb81ff3db36bd7b78f19e"
	I1101 10:10:00.289320  572974 cri.go:89] found id: "e362762826b71a934dbb5eea442d975cc05597b31ae86c9e7948f1898ab565fc"
	I1101 10:10:00.289324  572974 cri.go:89] found id: "ef07da579d17b52e1a0742051e372b41138b1757f7db2f1e07f610c931786d48"
	I1101 10:10:00.289328  572974 cri.go:89] found id: "877956ec3f06ed232e4f3b24002a100db3b52c5d04bbdac7f73bc031d79d7458"
	I1101 10:10:00.289331  572974 cri.go:89] found id: "c0cfd92e39c4b4eb46f69dd319f6066ef6ffe756c45bdd88862b3a2727126531"
	I1101 10:10:00.289335  572974 cri.go:89] found id: ""
	I1101 10:10:00.289398  572974 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-533709 -n pause-533709
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-533709 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-533709 logs -n 25: (1.589188893s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────────────
──┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────────────
──┤
	│ ssh     │ -p cilium-242892 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                            │ cilium-242892             │ jenkins │ v1.37.0 │ 01 Nov 25 10:07 UTC │                     │
	│ ssh     │ -p cilium-242892 sudo cat /etc/containerd/config.toml                                                                                                                                                                                       │ cilium-242892             │ jenkins │ v1.37.0 │ 01 Nov 25 10:07 UTC │                     │
	│ ssh     │ -p cilium-242892 sudo containerd config dump                                                                                                                                                                                                │ cilium-242892             │ jenkins │ v1.37.0 │ 01 Nov 25 10:07 UTC │                     │
	│ ssh     │ -p cilium-242892 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                         │ cilium-242892             │ jenkins │ v1.37.0 │ 01 Nov 25 10:07 UTC │                     │
	│ ssh     │ -p cilium-242892 sudo systemctl cat crio --no-pager                                                                                                                                                                                         │ cilium-242892             │ jenkins │ v1.37.0 │ 01 Nov 25 10:07 UTC │                     │
	│ ssh     │ -p cilium-242892 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                               │ cilium-242892             │ jenkins │ v1.37.0 │ 01 Nov 25 10:07 UTC │                     │
	│ ssh     │ -p cilium-242892 sudo crio config                                                                                                                                                                                                           │ cilium-242892             │ jenkins │ v1.37.0 │ 01 Nov 25 10:07 UTC │                     │
	│ delete  │ -p cilium-242892                                                                                                                                                                                                                            │ cilium-242892             │ jenkins │ v1.37.0 │ 01 Nov 25 10:07 UTC │ 01 Nov 25 10:07 UTC │
	│ start   │ -p guest-930796 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                                                                                                     │ guest-930796              │ jenkins │ v1.37.0 │ 01 Nov 25 10:07 UTC │ 01 Nov 25 10:08 UTC │
	│ ssh     │ -p NoKubernetes-336039 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                     │ NoKubernetes-336039       │ jenkins │ v1.37.0 │ 01 Nov 25 10:08 UTC │                     │
	│ delete  │ -p force-systemd-env-940638                                                                                                                                                                                                                 │ force-systemd-env-940638  │ jenkins │ v1.37.0 │ 01 Nov 25 10:08 UTC │ 01 Nov 25 10:08 UTC │
	│ delete  │ -p NoKubernetes-336039                                                                                                                                                                                                                      │ NoKubernetes-336039       │ jenkins │ v1.37.0 │ 01 Nov 25 10:08 UTC │ 01 Nov 25 10:08 UTC │
	│ start   │ -p pause-533709 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                                                                                                     │ pause-533709              │ jenkins │ v1.37.0 │ 01 Nov 25 10:08 UTC │ 01 Nov 25 10:09 UTC │
	│ start   │ -p cert-expiration-734989 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                                                                                                                                        │ cert-expiration-734989    │ jenkins │ v1.37.0 │ 01 Nov 25 10:08 UTC │ 01 Nov 25 10:09 UTC │
	│ start   │ -p force-systemd-flag-360782 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                                                   │ force-systemd-flag-360782 │ jenkins │ v1.37.0 │ 01 Nov 25 10:08 UTC │ 01 Nov 25 10:09 UTC │
	│ delete  │ -p kubernetes-upgrade-353156                                                                                                                                                                                                                │ kubernetes-upgrade-353156 │ jenkins │ v1.37.0 │ 01 Nov 25 10:08 UTC │ 01 Nov 25 10:08 UTC │
	│ start   │ -p cert-options-476227 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio                     │ cert-options-476227       │ jenkins │ v1.37.0 │ 01 Nov 25 10:08 UTC │ 01 Nov 25 10:10 UTC │
	│ start   │ -p pause-533709 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                                              │ pause-533709              │ jenkins │ v1.37.0 │ 01 Nov 25 10:09 UTC │ 01 Nov 25 10:10 UTC │
	│ ssh     │ force-systemd-flag-360782 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                        │ force-systemd-flag-360782 │ jenkins │ v1.37.0 │ 01 Nov 25 10:09 UTC │ 01 Nov 25 10:09 UTC │
	│ delete  │ -p force-systemd-flag-360782                                                                                                                                                                                                                │ force-systemd-flag-360782 │ jenkins │ v1.37.0 │ 01 Nov 25 10:09 UTC │ 01 Nov 25 10:09 UTC │
	│ start   │ -p old-k8s-version-080837 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-080837    │ jenkins │ v1.37.0 │ 01 Nov 25 10:09 UTC │                     │
	│ ssh     │ cert-options-476227 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                 │ cert-options-476227       │ jenkins │ v1.37.0 │ 01 Nov 25 10:10 UTC │ 01 Nov 25 10:10 UTC │
	│ ssh     │ -p cert-options-476227 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                               │ cert-options-476227       │ jenkins │ v1.37.0 │ 01 Nov 25 10:10 UTC │ 01 Nov 25 10:10 UTC │
	│ delete  │ -p cert-options-476227                                                                                                                                                                                                                      │ cert-options-476227       │ jenkins │ v1.37.0 │ 01 Nov 25 10:10 UTC │ 01 Nov 25 10:10 UTC │
	│ start   │ -p embed-certs-468183 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-468183        │ jenkins │ v1.37.0 │ 01 Nov 25 10:10 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────────────
──┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:10:11
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:10:11.364626  573367 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:10:11.365063  573367 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:10:11.365081  573367 out.go:374] Setting ErrFile to fd 2...
	I1101 10:10:11.365088  573367 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:10:11.365448  573367 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-530629/.minikube/bin
	I1101 10:10:11.366192  573367 out.go:368] Setting JSON to false
	I1101 10:10:11.367576  573367 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":67933,"bootTime":1761923878,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:10:11.367715  573367 start.go:143] virtualization: kvm guest
	I1101 10:10:11.369993  573367 out.go:179] * [embed-certs-468183] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:10:11.371301  573367 notify.go:221] Checking for updates...
	I1101 10:10:11.371309  573367 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 10:10:11.372738  573367 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:10:11.374284  573367 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-530629/kubeconfig
	I1101 10:10:11.375630  573367 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-530629/.minikube
	I1101 10:10:11.377032  573367 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:10:11.378315  573367 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:10:11.380283  573367 config.go:182] Loaded profile config "cert-expiration-734989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:10:11.380438  573367 config.go:182] Loaded profile config "guest-930796": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1101 10:10:11.380579  573367 config.go:182] Loaded profile config "old-k8s-version-080837": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 10:10:11.380762  573367 config.go:182] Loaded profile config "pause-533709": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:10:11.380920  573367 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:10:11.420791  573367 out.go:179] * Using the kvm2 driver based on user configuration
	I1101 10:10:11.421944  573367 start.go:309] selected driver: kvm2
	I1101 10:10:11.421961  573367 start.go:930] validating driver "kvm2" against <nil>
	I1101 10:10:11.421977  573367 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:10:11.422790  573367 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 10:10:11.423119  573367 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:10:11.423167  573367 cni.go:84] Creating CNI manager for ""
	I1101 10:10:11.423226  573367 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 10:10:11.423237  573367 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1101 10:10:11.423312  573367 start.go:353] cluster config:
	{Name:embed-certs-468183 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-468183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:10:11.423431  573367 iso.go:125] acquiring lock: {Name:mk4a0ae0d13e232f8e381ad8e5059e42b27a0733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:10:11.425029  573367 out.go:179] * Starting "embed-certs-468183" primary control-plane node in "embed-certs-468183" cluster
	I1101 10:10:09.764395  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:09.765161  573081 main.go:143] libmachine: no network interface addresses found for domain old-k8s-version-080837 (source=lease)
	I1101 10:10:09.765197  573081 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:10:09.765552  573081 main.go:143] libmachine: unable to find current IP address of domain old-k8s-version-080837 in network mk-old-k8s-version-080837 (interfaces detected: [])
	I1101 10:10:09.765595  573081 retry.go:31] will retry after 2.291513508s: waiting for domain to come up
	I1101 10:10:12.060133  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:12.060696  573081 main.go:143] libmachine: no network interface addresses found for domain old-k8s-version-080837 (source=lease)
	I1101 10:10:12.060713  573081 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:10:12.061152  573081 main.go:143] libmachine: unable to find current IP address of domain old-k8s-version-080837 in network mk-old-k8s-version-080837 (interfaces detected: [])
	I1101 10:10:12.061193  573081 retry.go:31] will retry after 4.280629345s: waiting for domain to come up
	I1101 10:10:13.268027  572974 api_server.go:269] stopped: https://192.168.61.122:8443/healthz: Get "https://192.168.61.122:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 10:10:13.268103  572974 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1101 10:10:11.426098  573367 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:10:11.426139  573367 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-530629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 10:10:11.426156  573367 cache.go:59] Caching tarball of preloaded images
	I1101 10:10:11.426253  573367 preload.go:233] Found /home/jenkins/minikube-integration/21833-530629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 10:10:11.426268  573367 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:10:11.426394  573367 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/embed-certs-468183/config.json ...
	I1101 10:10:11.426423  573367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/embed-certs-468183/config.json: {Name:mk0bcfbbdec7330a8609a2b3f9b6e2b8348c0444 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:10:11.426600  573367 start.go:360] acquireMachinesLock for embed-certs-468183: {Name:mk0f0dee5270210132f861d1e08706cfde31b35b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 10:10:16.345353  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:16.346270  573081 main.go:143] libmachine: domain old-k8s-version-080837 has current primary IP address 192.168.50.181 and MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:16.346297  573081 main.go:143] libmachine: found domain IP: 192.168.50.181
	I1101 10:10:16.346332  573081 main.go:143] libmachine: reserving static IP address...
	I1101 10:10:16.346926  573081 main.go:143] libmachine: unable to find host DHCP lease matching {name: "old-k8s-version-080837", mac: "52:54:00:ba:e1:24", ip: "192.168.50.181"} in network mk-old-k8s-version-080837
	I1101 10:10:16.586855  573081 main.go:143] libmachine: reserved static IP address 192.168.50.181 for domain old-k8s-version-080837
	I1101 10:10:16.586921  573081 main.go:143] libmachine: waiting for SSH...
	I1101 10:10:16.586930  573081 main.go:143] libmachine: Getting to WaitForSSH function...
	I1101 10:10:16.590543  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:16.591226  573081 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:e1:24", ip: ""} in network mk-old-k8s-version-080837: {Iface:virbr2 ExpiryTime:2025-11-01 11:10:14 +0000 UTC Type:0 Mac:52:54:00:ba:e1:24 Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ba:e1:24}
	I1101 10:10:16.591290  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined IP address 192.168.50.181 and MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:16.591515  573081 main.go:143] libmachine: Using SSH client type: native
	I1101 10:10:16.591759  573081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.50.181 22 <nil> <nil>}
	I1101 10:10:16.591774  573081 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1101 10:10:16.706389  573081 main.go:143] libmachine: SSH cmd err, output: <nil>: 
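The WaitForSSH step above just runs exit 0 over SSH until the guest accepts the connection. A rough stand-alone equivalent, reusing the address, username and key path that appear elsewhere in this log (the retry loop itself is an illustrative sketch, not minikube's code):

	# poll until the new VM accepts SSH, then continue provisioning
	until ssh -i /home/jenkins/minikube-integration/21833-530629/.minikube/machines/old-k8s-version-080837/id_rsa \
	    -o ConnectTimeout=3 -o StrictHostKeyChecking=no docker@192.168.50.181 'exit 0'; do
	    sleep 2
	done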
	I1101 10:10:16.706793  573081 main.go:143] libmachine: domain creation complete
	I1101 10:10:16.708421  573081 machine.go:94] provisionDockerMachine start ...
	I1101 10:10:16.710805  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:16.711295  573081 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:e1:24", ip: ""} in network mk-old-k8s-version-080837: {Iface:virbr2 ExpiryTime:2025-11-01 11:10:14 +0000 UTC Type:0 Mac:52:54:00:ba:e1:24 Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:old-k8s-version-080837 Clientid:01:52:54:00:ba:e1:24}
	I1101 10:10:16.711329  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined IP address 192.168.50.181 and MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:16.711504  573081 main.go:143] libmachine: Using SSH client type: native
	I1101 10:10:16.711702  573081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.50.181 22 <nil> <nil>}
	I1101 10:10:16.711712  573081 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:10:16.832430  573081 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1101 10:10:16.832465  573081 buildroot.go:166] provisioning hostname "old-k8s-version-080837"
	I1101 10:10:16.836364  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:16.836845  573081 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:e1:24", ip: ""} in network mk-old-k8s-version-080837: {Iface:virbr2 ExpiryTime:2025-11-01 11:10:14 +0000 UTC Type:0 Mac:52:54:00:ba:e1:24 Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:old-k8s-version-080837 Clientid:01:52:54:00:ba:e1:24}
	I1101 10:10:16.836880  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined IP address 192.168.50.181 and MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:16.837117  573081 main.go:143] libmachine: Using SSH client type: native
	I1101 10:10:16.837359  573081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.50.181 22 <nil> <nil>}
	I1101 10:10:16.837376  573081 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-080837 && echo "old-k8s-version-080837" | sudo tee /etc/hostname
	I1101 10:10:16.963511  573081 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-080837
	
	I1101 10:10:16.966816  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:16.967248  573081 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:e1:24", ip: ""} in network mk-old-k8s-version-080837: {Iface:virbr2 ExpiryTime:2025-11-01 11:10:14 +0000 UTC Type:0 Mac:52:54:00:ba:e1:24 Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:old-k8s-version-080837 Clientid:01:52:54:00:ba:e1:24}
	I1101 10:10:16.967280  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined IP address 192.168.50.181 and MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:16.967441  573081 main.go:143] libmachine: Using SSH client type: native
	I1101 10:10:16.967636  573081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.50.181 22 <nil> <nil>}
	I1101 10:10:16.967653  573081 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-080837' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-080837/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-080837' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:10:17.086146  573081 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:10:17.086182  573081 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21833-530629/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-530629/.minikube}
	I1101 10:10:17.086209  573081 buildroot.go:174] setting up certificates
	I1101 10:10:17.086224  573081 provision.go:84] configureAuth start
	I1101 10:10:17.089520  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.089953  573081 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:e1:24", ip: ""} in network mk-old-k8s-version-080837: {Iface:virbr2 ExpiryTime:2025-11-01 11:10:14 +0000 UTC Type:0 Mac:52:54:00:ba:e1:24 Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:old-k8s-version-080837 Clientid:01:52:54:00:ba:e1:24}
	I1101 10:10:17.089976  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined IP address 192.168.50.181 and MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.092254  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.092685  573081 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:e1:24", ip: ""} in network mk-old-k8s-version-080837: {Iface:virbr2 ExpiryTime:2025-11-01 11:10:14 +0000 UTC Type:0 Mac:52:54:00:ba:e1:24 Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:old-k8s-version-080837 Clientid:01:52:54:00:ba:e1:24}
	I1101 10:10:17.092707  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined IP address 192.168.50.181 and MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.092845  573081 provision.go:143] copyHostCerts
	I1101 10:10:17.092909  573081 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-530629/.minikube/ca.pem, removing ...
	I1101 10:10:17.092928  573081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-530629/.minikube/ca.pem
	I1101 10:10:17.093008  573081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-530629/.minikube/ca.pem (1078 bytes)
	I1101 10:10:17.093127  573081 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-530629/.minikube/cert.pem, removing ...
	I1101 10:10:17.093137  573081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-530629/.minikube/cert.pem
	I1101 10:10:17.093175  573081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-530629/.minikube/cert.pem (1123 bytes)
	I1101 10:10:17.093294  573081 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-530629/.minikube/key.pem, removing ...
	I1101 10:10:17.093309  573081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-530629/.minikube/key.pem
	I1101 10:10:17.093345  573081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-530629/.minikube/key.pem (1675 bytes)
	I1101 10:10:17.093458  573081 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-530629/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-080837 san=[127.0.0.1 192.168.50.181 localhost minikube old-k8s-version-080837]
	I1101 10:10:17.269296  573081 provision.go:177] copyRemoteCerts
	I1101 10:10:17.269362  573081 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:10:17.272052  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.272467  573081 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:e1:24", ip: ""} in network mk-old-k8s-version-080837: {Iface:virbr2 ExpiryTime:2025-11-01 11:10:14 +0000 UTC Type:0 Mac:52:54:00:ba:e1:24 Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:old-k8s-version-080837 Clientid:01:52:54:00:ba:e1:24}
	I1101 10:10:17.272501  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined IP address 192.168.50.181 and MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.272719  573081 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/old-k8s-version-080837/id_rsa Username:docker}
	I1101 10:10:17.361036  573081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:10:17.396585  573081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1101 10:10:17.429454  573081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:10:17.459991  573081 provision.go:87] duration metric: took 373.747673ms to configureAuth
	I1101 10:10:17.460035  573081 buildroot.go:189] setting minikube options for container-runtime
	I1101 10:10:17.460233  573081 config.go:182] Loaded profile config "old-k8s-version-080837": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 10:10:17.463173  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.463626  573081 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:e1:24", ip: ""} in network mk-old-k8s-version-080837: {Iface:virbr2 ExpiryTime:2025-11-01 11:10:14 +0000 UTC Type:0 Mac:52:54:00:ba:e1:24 Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:old-k8s-version-080837 Clientid:01:52:54:00:ba:e1:24}
	I1101 10:10:17.463661  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined IP address 192.168.50.181 and MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.463835  573081 main.go:143] libmachine: Using SSH client type: native
	I1101 10:10:17.464056  573081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.50.181 22 <nil> <nil>}
	I1101 10:10:17.464070  573081 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:10:17.986061  573367 start.go:364] duration metric: took 6.559419933s to acquireMachinesLock for "embed-certs-468183"
	I1101 10:10:17.986142  573367 start.go:93] Provisioning new machine with config: &{Name:embed-certs-468183 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-468183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:10:17.986284  573367 start.go:125] createHost starting for "" (driver="kvm2")
	I1101 10:10:18.272137  572974 api_server.go:269] stopped: https://192.168.61.122:8443/healthz: Get "https://192.168.61.122:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 10:10:18.272182  572974 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1101 10:10:17.725827  573081 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:10:17.725862  573081 machine.go:97] duration metric: took 1.017420655s to provisionDockerMachine
	I1101 10:10:17.725876  573081 client.go:176] duration metric: took 21.394411039s to LocalClient.Create
	I1101 10:10:17.725921  573081 start.go:167] duration metric: took 21.394505597s to libmachine.API.Create "old-k8s-version-080837"
	I1101 10:10:17.725934  573081 start.go:293] postStartSetup for "old-k8s-version-080837" (driver="kvm2")
	I1101 10:10:17.725949  573081 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:10:17.726033  573081 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:10:17.729293  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.729794  573081 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:e1:24", ip: ""} in network mk-old-k8s-version-080837: {Iface:virbr2 ExpiryTime:2025-11-01 11:10:14 +0000 UTC Type:0 Mac:52:54:00:ba:e1:24 Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:old-k8s-version-080837 Clientid:01:52:54:00:ba:e1:24}
	I1101 10:10:17.729825  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined IP address 192.168.50.181 and MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.730074  573081 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/old-k8s-version-080837/id_rsa Username:docker}
	I1101 10:10:17.816173  573081 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:10:17.821814  573081 info.go:137] Remote host: Buildroot 2025.02
	I1101 10:10:17.821846  573081 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-530629/.minikube/addons for local assets ...
	I1101 10:10:17.821973  573081 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-530629/.minikube/files for local assets ...
	I1101 10:10:17.822224  573081 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-530629/.minikube/files/etc/ssl/certs/5345152.pem -> 5345152.pem in /etc/ssl/certs
	I1101 10:10:17.822367  573081 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:10:17.835768  573081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/files/etc/ssl/certs/5345152.pem --> /etc/ssl/certs/5345152.pem (1708 bytes)
	I1101 10:10:17.871800  573081 start.go:296] duration metric: took 145.83832ms for postStartSetup
	I1101 10:10:17.875584  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.876112  573081 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:e1:24", ip: ""} in network mk-old-k8s-version-080837: {Iface:virbr2 ExpiryTime:2025-11-01 11:10:14 +0000 UTC Type:0 Mac:52:54:00:ba:e1:24 Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:old-k8s-version-080837 Clientid:01:52:54:00:ba:e1:24}
	I1101 10:10:17.876138  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined IP address 192.168.50.181 and MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.876364  573081 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/config.json ...
	I1101 10:10:17.876549  573081 start.go:128] duration metric: took 21.616405598s to createHost
	I1101 10:10:17.879051  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.879398  573081 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:e1:24", ip: ""} in network mk-old-k8s-version-080837: {Iface:virbr2 ExpiryTime:2025-11-01 11:10:14 +0000 UTC Type:0 Mac:52:54:00:ba:e1:24 Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:old-k8s-version-080837 Clientid:01:52:54:00:ba:e1:24}
	I1101 10:10:17.879422  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined IP address 192.168.50.181 and MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.879611  573081 main.go:143] libmachine: Using SSH client type: native
	I1101 10:10:17.879845  573081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.50.181 22 <nil> <nil>}
	I1101 10:10:17.879857  573081 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1101 10:10:17.985857  573081 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761991817.946271015
	
	I1101 10:10:17.985885  573081 fix.go:216] guest clock: 1761991817.946271015
	I1101 10:10:17.985917  573081 fix.go:229] Guest: 2025-11-01 10:10:17.946271015 +0000 UTC Remote: 2025-11-01 10:10:17.876561504 +0000 UTC m=+25.257838208 (delta=69.709511ms)
	I1101 10:10:17.985941  573081 fix.go:200] guest clock delta is within tolerance: 69.709511ms
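The guest-clock check above runs date +%s.%N inside the VM and compares it with the host's wall clock, accepting the drift when it is within tolerance (about 70ms here). The same comparison can be reproduced by hand; the one-liner below is only a sketch and ignores SSH round-trip latency:

	# compare guest and host clocks (requires bc on the host)
	guest=$(ssh docker@192.168.50.181 date +%s.%N)
	host=$(date +%s.%N)
	echo "delta: $(echo "$guest - $host" | bc) s"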
	I1101 10:10:17.985948  573081 start.go:83] releasing machines lock for "old-k8s-version-080837", held for 21.725953833s
	I1101 10:10:17.989673  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.990083  573081 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:e1:24", ip: ""} in network mk-old-k8s-version-080837: {Iface:virbr2 ExpiryTime:2025-11-01 11:10:14 +0000 UTC Type:0 Mac:52:54:00:ba:e1:24 Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:old-k8s-version-080837 Clientid:01:52:54:00:ba:e1:24}
	I1101 10:10:17.990108  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined IP address 192.168.50.181 and MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.990843  573081 ssh_runner.go:195] Run: cat /version.json
	I1101 10:10:17.990946  573081 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:10:17.994413  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.994811  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.994854  573081 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:e1:24", ip: ""} in network mk-old-k8s-version-080837: {Iface:virbr2 ExpiryTime:2025-11-01 11:10:14 +0000 UTC Type:0 Mac:52:54:00:ba:e1:24 Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:old-k8s-version-080837 Clientid:01:52:54:00:ba:e1:24}
	I1101 10:10:17.994879  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined IP address 192.168.50.181 and MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.995073  573081 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/old-k8s-version-080837/id_rsa Username:docker}
	I1101 10:10:17.995431  573081 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:e1:24", ip: ""} in network mk-old-k8s-version-080837: {Iface:virbr2 ExpiryTime:2025-11-01 11:10:14 +0000 UTC Type:0 Mac:52:54:00:ba:e1:24 Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:old-k8s-version-080837 Clientid:01:52:54:00:ba:e1:24}
	I1101 10:10:17.995462  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined IP address 192.168.50.181 and MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.995642  573081 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/old-k8s-version-080837/id_rsa Username:docker}
	I1101 10:10:18.080624  573081 ssh_runner.go:195] Run: systemctl --version
	I1101 10:10:18.107012  573081 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:10:18.281148  573081 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:10:18.289622  573081 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:10:18.289696  573081 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:10:18.312384  573081 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 10:10:18.312416  573081 start.go:496] detecting cgroup driver to use...
	I1101 10:10:18.312503  573081 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:10:18.338921  573081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:10:18.357960  573081 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:10:18.358028  573081 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:10:18.377290  573081 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:10:18.395398  573081 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:10:18.544803  573081 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:10:18.753999  573081 docker.go:234] disabling docker service ...
	I1101 10:10:18.754082  573081 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:10:18.771271  573081 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:10:18.789543  573081 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:10:18.961955  573081 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:10:19.113131  573081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:10:19.130117  573081 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:10:19.157381  573081 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 10:10:19.157450  573081 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:10:19.171518  573081 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:10:19.171594  573081 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:10:19.185163  573081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:10:19.198466  573081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:10:19.211935  573081 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:10:19.225604  573081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:10:19.238311  573081 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:10:19.263611  573081 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
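The sed/grep sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place. After those edits the drop-in should contain entries roughly like the following (the section placement follows stock CRI-O config conventions and is an assumption; the values are the ones set above):

	# excerpt of /etc/crio/crio.conf.d/02-crio.conf after the edits (approximate)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]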
	I1101 10:10:19.277406  573081 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:10:19.290333  573081 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 10:10:19.290412  573081 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 10:10:19.320599  573081 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
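The sysctl failure above is expected on a fresh guest: the net.bridge.* keys only exist once the br_netfilter module is loaded, which is why modprobe br_netfilter is run immediately afterwards. A quick manual check on the VM (sketch):

	sudo modprobe br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables   # resolves now instead of "cannot stat ..."
	cat /proc/sys/net/ipv4/ip_forward                # 1 after the echo above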
	I1101 10:10:19.338965  573081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:10:19.499507  573081 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:10:19.634883  573081 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:10:19.634993  573081 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:10:19.643768  573081 start.go:564] Will wait 60s for crictl version
	I1101 10:10:19.643844  573081 ssh_runner.go:195] Run: which crictl
	I1101 10:10:19.650542  573081 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 10:10:19.694886  573081 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1101 10:10:19.695015  573081 ssh_runner.go:195] Run: crio --version
	I1101 10:10:19.727446  573081 ssh_runner.go:195] Run: crio --version
	I1101 10:10:19.762499  573081 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.29.1 ...
	I1101 10:10:17.989074  573367 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1101 10:10:17.989313  573367 start.go:159] libmachine.API.Create for "embed-certs-468183" (driver="kvm2")
	I1101 10:10:17.989355  573367 client.go:173] LocalClient.Create starting
	I1101 10:10:17.989463  573367 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem
	I1101 10:10:17.989508  573367 main.go:143] libmachine: Decoding PEM data...
	I1101 10:10:17.989530  573367 main.go:143] libmachine: Parsing certificate...
	I1101 10:10:17.989629  573367 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem
	I1101 10:10:17.989660  573367 main.go:143] libmachine: Decoding PEM data...
	I1101 10:10:17.989677  573367 main.go:143] libmachine: Parsing certificate...
	I1101 10:10:17.990097  573367 main.go:143] libmachine: creating domain...
	I1101 10:10:17.990108  573367 main.go:143] libmachine: creating network...
	I1101 10:10:17.992092  573367 main.go:143] libmachine: found existing default network
	I1101 10:10:17.992356  573367 main.go:143] libmachine: <network connections='4'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 10:10:17.993735  573367 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:c1:f3:d3} reservation:<nil>}
	I1101 10:10:17.994824  573367 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:f3:08:19} reservation:<nil>}
	I1101 10:10:17.995662  573367 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:bf:46:19} reservation:<nil>}
	I1101 10:10:17.996437  573367 network.go:211] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:ee:d7:d1} reservation:<nil>}
	I1101 10:10:17.997508  573367 network.go:206] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001df4d60}
	I1101 10:10:17.997600  573367 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-embed-certs-468183</name>
	  <dns enable='no'/>
	  <ip address='192.168.83.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.83.2' end='192.168.83.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 10:10:18.003466  573367 main.go:143] libmachine: creating private network mk-embed-certs-468183 192.168.83.0/24...
	I1101 10:10:18.086847  573367 main.go:143] libmachine: private network mk-embed-certs-468183 192.168.83.0/24 created
	I1101 10:10:18.087288  573367 main.go:143] libmachine: <network>
	  <name>mk-embed-certs-468183</name>
	  <uuid>2f64b0ed-277f-4c5b-a247-cd3f68bf3b08</uuid>
	  <bridge name='virbr5' stp='on' delay='0'/>
	  <mac address='52:54:00:c7:87:de'/>
	  <dns enable='no'/>
	  <ip address='192.168.83.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.83.2' end='192.168.83.253'/>
	    </dhcp>
	  </ip>
	</network>
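minikube creates this private network through the libvirt API; the same definition can be managed with plain virsh, which is useful when a lease never shows up during the "waiting for domain to come up" retries (file and network names below just mirror the XML above; the commands are illustrative, not part of the test):

	virsh net-define mk-embed-certs-468183.xml    # register the <network> definition shown above
	virsh net-start mk-embed-certs-468183
	virsh net-dhcp-leases mk-embed-certs-468183   # inspect DHCP leases while waiting for the VM's IP
	virsh net-list --all                          # list the 192.168.x.0/24 bridges already in use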
	
	I1101 10:10:18.087324  573367 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21833-530629/.minikube/machines/embed-certs-468183 ...
	I1101 10:10:18.087346  573367 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21833-530629/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso
	I1101 10:10:18.087358  573367 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21833-530629/.minikube
	I1101 10:10:18.087431  573367 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21833-530629/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21833-530629/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso...
	I1101 10:10:18.353660  573367 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21833-530629/.minikube/machines/embed-certs-468183/id_rsa...
	I1101 10:10:18.795276  573367 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21833-530629/.minikube/machines/embed-certs-468183/embed-certs-468183.rawdisk...
	I1101 10:10:18.795324  573367 main.go:143] libmachine: Writing magic tar header
	I1101 10:10:18.795343  573367 main.go:143] libmachine: Writing SSH key tar header
	I1101 10:10:18.795418  573367 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21833-530629/.minikube/machines/embed-certs-468183 ...
	I1101 10:10:18.795475  573367 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21833-530629/.minikube/machines/embed-certs-468183
	I1101 10:10:18.795498  573367 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21833-530629/.minikube/machines/embed-certs-468183 (perms=drwx------)
	I1101 10:10:18.795508  573367 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21833-530629/.minikube/machines
	I1101 10:10:18.795517  573367 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21833-530629/.minikube/machines (perms=drwxr-xr-x)
	I1101 10:10:18.795527  573367 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21833-530629/.minikube
	I1101 10:10:18.795544  573367 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21833-530629/.minikube (perms=drwxr-xr-x)
	I1101 10:10:18.795557  573367 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21833-530629
	I1101 10:10:18.795568  573367 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21833-530629 (perms=drwxrwxr-x)
	I1101 10:10:18.795581  573367 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1101 10:10:18.795591  573367 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1101 10:10:18.795601  573367 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1101 10:10:18.795611  573367 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1101 10:10:18.795620  573367 main.go:143] libmachine: checking permissions on dir: /home
	I1101 10:10:18.795627  573367 main.go:143] libmachine: skipping /home - not owner
	I1101 10:10:18.795634  573367 main.go:143] libmachine: defining domain...
	I1101 10:10:18.797340  573367 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>embed-certs-468183</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21833-530629/.minikube/machines/embed-certs-468183/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21833-530629/.minikube/machines/embed-certs-468183/embed-certs-468183.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-embed-certs-468183'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1101 10:10:18.802987  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:ff:1a:65 in network default
	I1101 10:10:18.803645  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:18.803665  573367 main.go:143] libmachine: starting domain...
	I1101 10:10:18.803670  573367 main.go:143] libmachine: ensuring networks are active...
	I1101 10:10:18.804515  573367 main.go:143] libmachine: Ensuring network default is active
	I1101 10:10:18.804971  573367 main.go:143] libmachine: Ensuring network mk-embed-certs-468183 is active
	I1101 10:10:18.805661  573367 main.go:143] libmachine: getting domain XML...
	I1101 10:10:18.806958  573367 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>embed-certs-468183</name>
	  <uuid>6ea47518-574e-4d5a-8064-0f7a13089d7d</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21833-530629/.minikube/machines/embed-certs-468183/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21833-530629/.minikube/machines/embed-certs-468183/embed-certs-468183.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:78:7b:11'/>
	      <source network='mk-embed-certs-468183'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:ff:1a:65'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
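The XML above is libvirt's expanded view of the domain that was just defined. Out of band, the same create-and-wait flow can be driven with virsh; domifaddr with --source lease and --source arp corresponds directly to the source=lease / source=arp lookups in the retry loop below (illustrative commands only):

	virsh define embed-certs-468183.xml                  # register the domain definition
	virsh start embed-certs-468183
	virsh domifaddr embed-certs-468183 --source lease    # DHCP lease table, tried first
	virsh domifaddr embed-certs-468183 --source arp      # ARP fallback when no lease is visible yet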
	
	I1101 10:10:20.252853  573367 main.go:143] libmachine: waiting for domain to start...
	I1101 10:10:20.254781  573367 main.go:143] libmachine: domain is now running
	I1101 10:10:20.254804  573367 main.go:143] libmachine: waiting for IP...
	I1101 10:10:20.255969  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:20.256680  573367 main.go:143] libmachine: no network interface addresses found for domain embed-certs-468183 (source=lease)
	I1101 10:10:20.256698  573367 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:10:20.257194  573367 main.go:143] libmachine: unable to find current IP address of domain embed-certs-468183 in network mk-embed-certs-468183 (interfaces detected: [])
	I1101 10:10:20.257263  573367 retry.go:31] will retry after 206.751059ms: waiting for domain to come up
	I1101 10:10:20.466253  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:20.467196  573367 main.go:143] libmachine: no network interface addresses found for domain embed-certs-468183 (source=lease)
	I1101 10:10:20.467220  573367 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:10:20.467630  573367 main.go:143] libmachine: unable to find current IP address of domain embed-certs-468183 in network mk-embed-certs-468183 (interfaces detected: [])
	I1101 10:10:20.467671  573367 retry.go:31] will retry after 333.946985ms: waiting for domain to come up
	I1101 10:10:20.803276  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:20.804124  573367 main.go:143] libmachine: no network interface addresses found for domain embed-certs-468183 (source=lease)
	I1101 10:10:20.804142  573367 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:10:20.804639  573367 main.go:143] libmachine: unable to find current IP address of domain embed-certs-468183 in network mk-embed-certs-468183 (interfaces detected: [])
	I1101 10:10:20.804686  573367 retry.go:31] will retry after 314.04737ms: waiting for domain to come up
	I1101 10:10:21.120435  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:21.121425  573367 main.go:143] libmachine: no network interface addresses found for domain embed-certs-468183 (source=lease)
	I1101 10:10:21.121446  573367 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:10:21.121972  573367 main.go:143] libmachine: unable to find current IP address of domain embed-certs-468183 in network mk-embed-certs-468183 (interfaces detected: [])
	I1101 10:10:21.122021  573367 retry.go:31] will retry after 547.50417ms: waiting for domain to come up
	I1101 10:10:19.767366  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:19.767857  573081 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:e1:24", ip: ""} in network mk-old-k8s-version-080837: {Iface:virbr2 ExpiryTime:2025-11-01 11:10:14 +0000 UTC Type:0 Mac:52:54:00:ba:e1:24 Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:old-k8s-version-080837 Clientid:01:52:54:00:ba:e1:24}
	I1101 10:10:19.767886  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined IP address 192.168.50.181 and MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:19.768089  573081 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1101 10:10:19.773448  573081 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:10:19.791035  573081 kubeadm.go:884] updating cluster {Name:old-k8s-version-080837 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-080837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.181 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:10:19.791220  573081 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 10:10:19.791293  573081 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:10:19.835612  573081 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.0". assuming images are not preloaded.
	I1101 10:10:19.835705  573081 ssh_runner.go:195] Run: which lz4
	I1101 10:10:19.840655  573081 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 10:10:19.846052  573081 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 10:10:19.846078  573081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457056555 bytes)
	I1101 10:10:21.889979  573081 crio.go:462] duration metric: took 2.04936871s to copy over tarball
	I1101 10:10:21.890092  573081 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 10:10:21.799791  572974 api_server.go:269] stopped: https://192.168.61.122:8443/healthz: Get "https://192.168.61.122:8443/healthz": read tcp 192.168.61.1:49794->192.168.61.122:8443: read: connection reset by peer
	I1101 10:10:21.799861  572974 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1101 10:10:21.800492  572974 api_server.go:269] stopped: https://192.168.61.122:8443/healthz: Get "https://192.168.61.122:8443/healthz": dial tcp 192.168.61.122:8443: connect: connection refused
	I1101 10:10:22.261161  572974 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1101 10:10:22.261999  572974 api_server.go:269] stopped: https://192.168.61.122:8443/healthz: Get "https://192.168.61.122:8443/healthz": dial tcp 192.168.61.122:8443: connect: connection refused
	I1101 10:10:22.760787  572974 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1101 10:10:22.761717  572974 api_server.go:269] stopped: https://192.168.61.122:8443/healthz: Get "https://192.168.61.122:8443/healthz": dial tcp 192.168.61.122:8443: connect: connection refused
	I1101 10:10:23.261151  572974 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1101 10:10:23.261928  572974 api_server.go:269] stopped: https://192.168.61.122:8443/healthz: Get "https://192.168.61.122:8443/healthz": dial tcp 192.168.61.122:8443: connect: connection refused
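The 572974 lines in this stretch come from polling the apiserver's /healthz endpoint at 192.168.61.122:8443, which is currently refusing connections. The equivalent manual probe (-k skips certificate verification for the hand-run check):

	# prints "ok" once kube-apiserver is serving again; fails fast while the port is closed
	curl -sk --max-time 2 https://192.168.61.122:8443/healthz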
	I1101 10:10:21.671178  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:21.671975  573367 main.go:143] libmachine: no network interface addresses found for domain embed-certs-468183 (source=lease)
	I1101 10:10:21.672003  573367 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:10:21.672489  573367 main.go:143] libmachine: unable to find current IP address of domain embed-certs-468183 in network mk-embed-certs-468183 (interfaces detected: [])
	I1101 10:10:21.672541  573367 retry.go:31] will retry after 476.816581ms: waiting for domain to come up
	I1101 10:10:22.151496  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:22.152390  573367 main.go:143] libmachine: no network interface addresses found for domain embed-certs-468183 (source=lease)
	I1101 10:10:22.152415  573367 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:10:22.152947  573367 main.go:143] libmachine: unable to find current IP address of domain embed-certs-468183 in network mk-embed-certs-468183 (interfaces detected: [])
	I1101 10:10:22.152999  573367 retry.go:31] will retry after 833.583613ms: waiting for domain to come up
	I1101 10:10:22.990847  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:22.993494  573367 main.go:143] libmachine: no network interface addresses found for domain embed-certs-468183 (source=lease)
	I1101 10:10:22.993526  573367 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:10:22.994107  573367 main.go:143] libmachine: unable to find current IP address of domain embed-certs-468183 in network mk-embed-certs-468183 (interfaces detected: [])
	I1101 10:10:22.994159  573367 retry.go:31] will retry after 1.018529043s: waiting for domain to come up
	I1101 10:10:24.014529  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:24.015417  573367 main.go:143] libmachine: no network interface addresses found for domain embed-certs-468183 (source=lease)
	I1101 10:10:24.015441  573367 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:10:24.015925  573367 main.go:143] libmachine: unable to find current IP address of domain embed-certs-468183 in network mk-embed-certs-468183 (interfaces detected: [])
	I1101 10:10:24.015996  573367 retry.go:31] will retry after 1.077192285s: waiting for domain to come up
	I1101 10:10:25.095377  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:25.096265  573367 main.go:143] libmachine: no network interface addresses found for domain embed-certs-468183 (source=lease)
	I1101 10:10:25.096294  573367 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:10:25.096701  573367 main.go:143] libmachine: unable to find current IP address of domain embed-certs-468183 in network mk-embed-certs-468183 (interfaces detected: [])
	I1101 10:10:25.096755  573367 retry.go:31] will retry after 1.602623159s: waiting for domain to come up
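	The retry.go lines above show minikube repeatedly asking libvirt for the domain's IP address (first from the DHCP lease, then via ARP) and sleeping for an increasing interval between attempts. A minimal, illustrative Go sketch of that wait-with-growing-backoff pattern follows; it is not minikube's retry.go, and lookupIP is a hypothetical stub standing in for the real lease/ARP queries.

	// Illustrative sketch of the wait-for-domain-IP pattern in the log above.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP is a stub; the real code queries libvirt leases and the ARP table.
	func lookupIP(domain string) (string, error) {
		return "", errors.New("no network interface addresses found")
	}

	func waitForIP(domain string, deadline time.Duration) (string, error) {
		start := time.Now()
		delay := 300 * time.Millisecond
		for time.Since(start) < deadline {
			if ip, err := lookupIP(domain); err == nil {
				return ip, nil
			}
			// Jittered, growing delay, matching the increasing "will retry after" values above.
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for domain to come up\n", wait)
			time.Sleep(wait)
			delay = delay * 3 / 2
		}
		return "", fmt.Errorf("timed out waiting for %s", domain)
	}

	func main() {
		if _, err := waitForIP("embed-certs-468183", 5*time.Second); err != nil {
			fmt.Println(err)
		}
	}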
	I1101 10:10:24.045200  573081 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.155066705s)
	I1101 10:10:24.045247  573081 crio.go:469] duration metric: took 2.155227341s to extract the tarball
	I1101 10:10:24.045264  573081 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 10:10:24.094703  573081 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:10:24.154204  573081 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:10:24.154234  573081 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:10:24.154245  573081 kubeadm.go:935] updating node { 192.168.50.181 8443 v1.28.0 crio true true} ...
	I1101 10:10:24.154362  573081 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-080837 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.181
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-080837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:10:24.154452  573081 ssh_runner.go:195] Run: crio config
	I1101 10:10:24.210069  573081 cni.go:84] Creating CNI manager for ""
	I1101 10:10:24.210103  573081 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 10:10:24.210127  573081 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:10:24.210159  573081 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.181 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-080837 NodeName:old-k8s-version-080837 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.181"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.181 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:10:24.210365  573081 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.181
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-080837"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.181
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.181"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:10:24.210439  573081 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1101 10:10:24.227451  573081 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:10:24.227533  573081 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:10:24.244860  573081 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1101 10:10:24.272097  573081 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:10:24.298197  573081 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I1101 10:10:24.325280  573081 ssh_runner.go:195] Run: grep 192.168.50.181	control-plane.minikube.internal$ /etc/hosts
	I1101 10:10:24.331324  573081 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.181	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:10:24.353149  573081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:10:24.511240  573081 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:10:24.568507  573081 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837 for IP: 192.168.50.181
	I1101 10:10:24.568555  573081 certs.go:195] generating shared ca certs ...
	I1101 10:10:24.568576  573081 certs.go:227] acquiring lock for ca certs: {Name:mkfa41f6ee02a6d4adbbbd414d6f4b29bf47b076 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:10:24.568785  573081 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-530629/.minikube/ca.key
	I1101 10:10:24.568838  573081 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.key
	I1101 10:10:24.568854  573081 certs.go:257] generating profile certs ...
	I1101 10:10:24.568954  573081 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/client.key
	I1101 10:10:24.568978  573081 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/client.crt with IP's: []
	I1101 10:10:24.765658  573081 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/client.crt ...
	I1101 10:10:24.765707  573081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/client.crt: {Name:mk1a1034120b579b2a4a577dbc9b992e11805d34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:10:24.766022  573081 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/client.key ...
	I1101 10:10:24.766055  573081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/client.key: {Name:mk3b6b73f944fe338084137f8ddbd9af97f63205 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:10:24.766209  573081 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/apiserver.key.aa0f939c
	I1101 10:10:24.766233  573081 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/apiserver.crt.aa0f939c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.181]
	I1101 10:10:25.108541  573081 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/apiserver.crt.aa0f939c ...
	I1101 10:10:25.108580  573081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/apiserver.crt.aa0f939c: {Name:mk9a3367b59928237e364f06f8dda75749781e61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:10:25.108823  573081 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/apiserver.key.aa0f939c ...
	I1101 10:10:25.108850  573081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/apiserver.key.aa0f939c: {Name:mk9a0c74876819df4909d796871b4b43ad1893eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:10:25.109075  573081 certs.go:382] copying /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/apiserver.crt.aa0f939c -> /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/apiserver.crt
	I1101 10:10:25.109203  573081 certs.go:386] copying /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/apiserver.key.aa0f939c -> /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/apiserver.key
	I1101 10:10:25.109282  573081 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/proxy-client.key
	I1101 10:10:25.109302  573081 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/proxy-client.crt with IP's: []
	I1101 10:10:25.467939  573081 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/proxy-client.crt ...
	I1101 10:10:25.467975  573081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/proxy-client.crt: {Name:mkce620e5b8e626fe8a9fd6b7a8833c73ad2f572 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:10:25.468210  573081 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/proxy-client.key ...
	I1101 10:10:25.468235  573081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/proxy-client.key: {Name:mkbb6366934922a4c5c88d752402d6eea326e81b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:10:25.468452  573081 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/534515.pem (1338 bytes)
	W1101 10:10:25.468492  573081 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-530629/.minikube/certs/534515_empty.pem, impossibly tiny 0 bytes
	I1101 10:10:25.468503  573081 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:10:25.468526  573081 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:10:25.468548  573081 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:10:25.468569  573081 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/key.pem (1675 bytes)
	I1101 10:10:25.468605  573081 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/files/etc/ssl/certs/5345152.pem (1708 bytes)
	I1101 10:10:25.469322  573081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:10:25.531164  573081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:10:25.591719  573081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:10:25.635978  573081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:10:25.675078  573081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 10:10:25.714022  573081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:10:25.756560  573081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:10:25.797181  573081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1101 10:10:25.836237  573081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/certs/534515.pem --> /usr/share/ca-certificates/534515.pem (1338 bytes)
	I1101 10:10:25.875149  573081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/files/etc/ssl/certs/5345152.pem --> /usr/share/ca-certificates/5345152.pem (1708 bytes)
	I1101 10:10:25.916111  573081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:10:25.952045  573081 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:10:25.980514  573081 ssh_runner.go:195] Run: openssl version
	I1101 10:10:25.989143  573081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/534515.pem && ln -fs /usr/share/ca-certificates/534515.pem /etc/ssl/certs/534515.pem"
	I1101 10:10:26.009169  573081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/534515.pem
	I1101 10:10:26.016861  573081 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:07 /usr/share/ca-certificates/534515.pem
	I1101 10:10:26.016959  573081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/534515.pem
	I1101 10:10:26.027157  573081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/534515.pem /etc/ssl/certs/51391683.0"
	I1101 10:10:26.045219  573081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5345152.pem && ln -fs /usr/share/ca-certificates/5345152.pem /etc/ssl/certs/5345152.pem"
	I1101 10:10:26.063937  573081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5345152.pem
	I1101 10:10:26.070362  573081 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:07 /usr/share/ca-certificates/5345152.pem
	I1101 10:10:26.070452  573081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5345152.pem
	I1101 10:10:26.080787  573081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5345152.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:10:26.096987  573081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:10:26.113377  573081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:10:26.120404  573081 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:10:26.120495  573081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:10:26.130384  573081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:10:26.146534  573081 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:10:26.153837  573081 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 10:10:26.153922  573081 kubeadm.go:401] StartCluster: {Name:old-k8s-version-080837 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-080837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.181 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:10:26.154026  573081 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:10:26.154129  573081 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:10:26.210513  573081 cri.go:89] found id: ""
	I1101 10:10:26.210596  573081 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:10:26.229849  573081 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:10:26.250069  573081 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:10:26.271499  573081 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 10:10:26.271524  573081 kubeadm.go:158] found existing configuration files:
	
	I1101 10:10:26.271581  573081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 10:10:26.289516  573081 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 10:10:26.289609  573081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 10:10:26.312386  573081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 10:10:26.327559  573081 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 10:10:26.327648  573081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:10:26.342307  573081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 10:10:26.354630  573081 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 10:10:26.354710  573081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:10:26.367887  573081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 10:10:26.382219  573081 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 10:10:26.382302  573081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 10:10:26.399466  573081 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1101 10:10:26.495705  573081 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1101 10:10:26.495796  573081 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 10:10:26.667752  573081 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 10:10:26.667920  573081 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 10:10:26.668093  573081 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 10:10:26.948089  573081 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 10:10:26.950201  573081 out.go:252]   - Generating certificates and keys ...
	I1101 10:10:26.950378  573081 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 10:10:26.950490  573081 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 10:10:27.367709  573081 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 10:10:23.760891  572974 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1101 10:10:23.761852  572974 api_server.go:269] stopped: https://192.168.61.122:8443/healthz: Get "https://192.168.61.122:8443/healthz": dial tcp 192.168.61.122:8443: connect: connection refused
	I1101 10:10:24.260369  572974 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1101 10:10:27.184654  572974 api_server.go:279] https://192.168.61.122:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 10:10:27.184691  572974 api_server.go:103] status: https://192.168.61.122:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 10:10:27.184711  572974 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1101 10:10:27.233641  572974 api_server.go:279] https://192.168.61.122:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 10:10:27.233678  572974 api_server.go:103] status: https://192.168.61.122:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 10:10:27.260961  572974 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1101 10:10:27.293566  572974 api_server.go:279] https://192.168.61.122:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:10:27.293612  572974 api_server.go:103] status: https://192.168.61.122:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:10:27.761149  572974 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1101 10:10:27.766516  572974 api_server.go:279] https://192.168.61.122:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:10:27.766566  572974 api_server.go:103] status: https://192.168.61.122:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:10:28.261096  572974 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1101 10:10:28.267778  572974 api_server.go:279] https://192.168.61.122:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:10:28.267811  572974 api_server.go:103] status: https://192.168.61.122:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:10:28.760569  572974 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1101 10:10:28.766928  572974 api_server.go:279] https://192.168.61.122:8443/healthz returned 200:
	ok
	I1101 10:10:28.784336  572974 api_server.go:141] control plane version: v1.34.1
	I1101 10:10:28.784384  572974 api_server.go:131] duration metric: took 25.524192662s to wait for apiserver health ...
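	The api_server.go lines above poll https://192.168.61.122:8443/healthz and treat connection refusals, 403 responses (the anonymous user is not yet authorized) and 500 responses (post-start hooks still failing) as "not ready", looping until a plain 200 "ok" comes back. A minimal Go sketch of such a polling loop follows; it is not minikube's actual api_server.go code, and skipping TLS verification for the probe is an assumption made only to keep the example self-contained.

	// Illustrative healthz polling loop, mirroring the checks logged above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// The bootstrap apiserver presents a cluster-local CA; this probe skips
			// verification purely for illustration.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				// 403 and 500 mean "keep waiting", exactly as in the log above.
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz: %s\n", body)
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.122:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}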
	I1101 10:10:28.784399  572974 cni.go:84] Creating CNI manager for ""
	I1101 10:10:28.784409  572974 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 10:10:28.786288  572974 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 10:10:28.787778  572974 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 10:10:28.807668  572974 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1101 10:10:28.858743  572974 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:10:28.872269  572974 system_pods.go:59] 6 kube-system pods found
	I1101 10:10:28.872336  572974 system_pods.go:61] "coredns-66bc5c9577-pzwdg" [6b3dc10c-d5ad-40f9-a28b-c4a89479f817] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:10:28.872350  572974 system_pods.go:61] "etcd-pause-533709" [784264db-f73a-4654-9e23-fe01943ce80b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:10:28.872364  572974 system_pods.go:61] "kube-apiserver-pause-533709" [5992caa7-9a4c-41a7-b093-d38008a71110] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:10:28.872405  572974 system_pods.go:61] "kube-controller-manager-pause-533709" [40074866-5117-49a2-800a-6091577fa142] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:10:28.872415  572974 system_pods.go:61] "kube-proxy-mkdfj" [1c0c82af-9116-41ce-9b01-bb2802550969] Running
	I1101 10:10:28.872425  572974 system_pods.go:61] "kube-scheduler-pause-533709" [4d3ac967-3992-4b4a-a4f7-bcaa03c9952b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:10:28.872436  572974 system_pods.go:74] duration metric: took 13.65921ms to wait for pod list to return data ...
	I1101 10:10:28.872450  572974 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:10:28.883369  572974 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 10:10:28.883494  572974 node_conditions.go:123] node cpu capacity is 2
	I1101 10:10:28.883529  572974 node_conditions.go:105] duration metric: took 11.072649ms to run NodePressure ...
	I1101 10:10:28.883631  572974 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 10:10:29.259407  572974 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1101 10:10:29.263351  572974 kubeadm.go:744] kubelet initialised
	I1101 10:10:29.263384  572974 kubeadm.go:745] duration metric: took 3.941399ms waiting for restarted kubelet to initialise ...
	I1101 10:10:29.263418  572974 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 10:10:29.282179  572974 ops.go:34] apiserver oom_adj: -16
	I1101 10:10:29.282228  572974 kubeadm.go:602] duration metric: took 28.922007333s to restartPrimaryControlPlane
	I1101 10:10:29.282244  572974 kubeadm.go:403] duration metric: took 29.050771335s to StartCluster
	I1101 10:10:29.282281  572974 settings.go:142] acquiring lock: {Name:mke0bea80b55c21af3a3a0f83862cfe6da014dd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:10:29.282421  572974 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-530629/kubeconfig
	I1101 10:10:29.284002  572974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/kubeconfig: {Name:mk1f1e6312f33030082fd627c6f74ca7eee16587 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:10:29.284358  572974 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.61.122 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:10:29.284502  572974 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:10:29.284686  572974 config.go:182] Loaded profile config "pause-533709": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:10:29.286098  572974 out.go:179] * Enabled addons: 
	I1101 10:10:29.286111  572974 out.go:179] * Verifying Kubernetes components...
	I1101 10:10:27.823498  573081 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 10:10:27.942253  573081 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 10:10:28.042066  573081 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 10:10:28.412468  573081 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 10:10:28.412674  573081 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-080837] and IPs [192.168.50.181 127.0.0.1 ::1]
	I1101 10:10:28.495829  573081 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 10:10:28.496124  573081 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-080837] and IPs [192.168.50.181 127.0.0.1 ::1]
	I1101 10:10:28.609561  573081 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 10:10:28.805701  573081 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 10:10:29.134014  573081 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 10:10:29.134263  573081 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 10:10:29.234558  573081 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 10:10:29.520930  573081 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 10:10:29.616114  573081 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 10:10:29.835978  573081 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 10:10:29.836490  573081 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 10:10:29.841581  573081 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 10:10:26.700879  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:26.701728  573367 main.go:143] libmachine: no network interface addresses found for domain embed-certs-468183 (source=lease)
	I1101 10:10:26.701757  573367 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:10:26.702177  573367 main.go:143] libmachine: unable to find current IP address of domain embed-certs-468183 in network mk-embed-certs-468183 (interfaces detected: [])
	I1101 10:10:26.702227  573367 retry.go:31] will retry after 1.917803915s: waiting for domain to come up
	I1101 10:10:28.622456  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:28.623337  573367 main.go:143] libmachine: no network interface addresses found for domain embed-certs-468183 (source=lease)
	I1101 10:10:28.623366  573367 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:10:28.623786  573367 main.go:143] libmachine: unable to find current IP address of domain embed-certs-468183 in network mk-embed-certs-468183 (interfaces detected: [])
	I1101 10:10:28.623838  573367 retry.go:31] will retry after 2.32656352s: waiting for domain to come up
	I1101 10:10:30.953060  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:30.953692  573367 main.go:143] libmachine: no network interface addresses found for domain embed-certs-468183 (source=lease)
	I1101 10:10:30.953713  573367 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:10:30.954073  573367 main.go:143] libmachine: unable to find current IP address of domain embed-certs-468183 in network mk-embed-certs-468183 (interfaces detected: [])
	I1101 10:10:30.954121  573367 retry.go:31] will retry after 2.612957344s: waiting for domain to come up
	I1101 10:10:29.843646  573081 out.go:252]   - Booting up control plane ...
	I1101 10:10:29.843796  573081 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 10:10:29.843973  573081 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 10:10:29.844080  573081 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 10:10:29.866035  573081 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 10:10:29.866869  573081 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 10:10:29.867012  573081 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 10:10:30.089773  573081 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 10:10:29.287328  572974 addons.go:515] duration metric: took 2.845537ms for enable addons: enabled=[]
	I1101 10:10:29.287414  572974 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:10:29.611155  572974 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:10:29.643411  572974 node_ready.go:35] waiting up to 6m0s for node "pause-533709" to be "Ready" ...
	I1101 10:10:29.647785  572974 node_ready.go:49] node "pause-533709" is "Ready"
	I1101 10:10:29.647829  572974 node_ready.go:38] duration metric: took 4.345239ms for node "pause-533709" to be "Ready" ...
	I1101 10:10:29.647849  572974 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:10:29.647930  572974 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:10:29.683944  572974 api_server.go:72] duration metric: took 399.5333ms to wait for apiserver process to appear ...
	I1101 10:10:29.683985  572974 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:10:29.684019  572974 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1101 10:10:29.691227  572974 api_server.go:279] https://192.168.61.122:8443/healthz returned 200:
	ok
	I1101 10:10:29.692710  572974 api_server.go:141] control plane version: v1.34.1
	I1101 10:10:29.692751  572974 api_server.go:131] duration metric: took 8.755244ms to wait for apiserver health ...
	I1101 10:10:29.692766  572974 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:10:29.697073  572974 system_pods.go:59] 6 kube-system pods found
	I1101 10:10:29.697107  572974 system_pods.go:61] "coredns-66bc5c9577-pzwdg" [6b3dc10c-d5ad-40f9-a28b-c4a89479f817] Running
	I1101 10:10:29.697124  572974 system_pods.go:61] "etcd-pause-533709" [784264db-f73a-4654-9e23-fe01943ce80b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:10:29.697135  572974 system_pods.go:61] "kube-apiserver-pause-533709" [5992caa7-9a4c-41a7-b093-d38008a71110] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:10:29.697152  572974 system_pods.go:61] "kube-controller-manager-pause-533709" [40074866-5117-49a2-800a-6091577fa142] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:10:29.697164  572974 system_pods.go:61] "kube-proxy-mkdfj" [1c0c82af-9116-41ce-9b01-bb2802550969] Running
	I1101 10:10:29.697177  572974 system_pods.go:61] "kube-scheduler-pause-533709" [4d3ac967-3992-4b4a-a4f7-bcaa03c9952b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:10:29.697192  572974 system_pods.go:74] duration metric: took 4.416487ms to wait for pod list to return data ...
	I1101 10:10:29.697208  572974 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:10:29.700913  572974 default_sa.go:45] found service account: "default"
	I1101 10:10:29.700941  572974 default_sa.go:55] duration metric: took 3.720898ms for default service account to be created ...
	I1101 10:10:29.700955  572974 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:10:29.705208  572974 system_pods.go:86] 6 kube-system pods found
	I1101 10:10:29.705245  572974 system_pods.go:89] "coredns-66bc5c9577-pzwdg" [6b3dc10c-d5ad-40f9-a28b-c4a89479f817] Running
	I1101 10:10:29.705261  572974 system_pods.go:89] "etcd-pause-533709" [784264db-f73a-4654-9e23-fe01943ce80b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:10:29.705271  572974 system_pods.go:89] "kube-apiserver-pause-533709" [5992caa7-9a4c-41a7-b093-d38008a71110] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:10:29.705281  572974 system_pods.go:89] "kube-controller-manager-pause-533709" [40074866-5117-49a2-800a-6091577fa142] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:10:29.705288  572974 system_pods.go:89] "kube-proxy-mkdfj" [1c0c82af-9116-41ce-9b01-bb2802550969] Running
	I1101 10:10:29.705298  572974 system_pods.go:89] "kube-scheduler-pause-533709" [4d3ac967-3992-4b4a-a4f7-bcaa03c9952b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:10:29.705314  572974 system_pods.go:126] duration metric: took 4.352365ms to wait for k8s-apps to be running ...
	I1101 10:10:29.705323  572974 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:10:29.705393  572974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:10:29.729774  572974 system_svc.go:56] duration metric: took 24.435914ms WaitForService to wait for kubelet
	I1101 10:10:29.729821  572974 kubeadm.go:587] duration metric: took 445.416986ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:10:29.729846  572974 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:10:29.734041  572974 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 10:10:29.734080  572974 node_conditions.go:123] node cpu capacity is 2
	I1101 10:10:29.734098  572974 node_conditions.go:105] duration metric: took 4.242863ms to run NodePressure ...
	I1101 10:10:29.734116  572974 start.go:242] waiting for startup goroutines ...
	I1101 10:10:29.734127  572974 start.go:247] waiting for cluster config update ...
	I1101 10:10:29.734137  572974 start.go:256] writing updated cluster config ...
	I1101 10:10:29.734601  572974 ssh_runner.go:195] Run: rm -f paused
	I1101 10:10:29.741162  572974 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:10:29.742097  572974 kapi.go:59] client config for pause-533709: &rest.Config{Host:"https://192.168.61.122:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21833-530629/.minikube/profiles/pause-533709/client.crt", KeyFile:"/home/jenkins/minikube-integration/21833-530629/.minikube/profiles/pause-533709/client.key", CAFile:"/home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[
]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 10:10:29.746510  572974 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pzwdg" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:10:29.754491  572974 pod_ready.go:94] pod "coredns-66bc5c9577-pzwdg" is "Ready"
	I1101 10:10:29.754524  572974 pod_ready.go:86] duration metric: took 7.982196ms for pod "coredns-66bc5c9577-pzwdg" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:10:29.758226  572974 pod_ready.go:83] waiting for pod "etcd-pause-533709" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:10:31.766356  572974 pod_ready.go:104] pod "etcd-pause-533709" is not "Ready", error: <nil>
	I1101 10:10:33.570088  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:33.570757  573367 main.go:143] libmachine: no network interface addresses found for domain embed-certs-468183 (source=lease)
	I1101 10:10:33.570784  573367 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:10:33.571144  573367 main.go:143] libmachine: unable to find current IP address of domain embed-certs-468183 in network mk-embed-certs-468183 (interfaces detected: [])
	I1101 10:10:33.571204  573367 retry.go:31] will retry after 3.631652192s: waiting for domain to come up
	I1101 10:10:36.590043  573081 kubeadm.go:319] [apiclient] All control plane components are healthy after 6.503586 seconds
	I1101 10:10:36.590152  573081 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 10:10:36.606263  573081 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 10:10:37.151664  573081 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 10:10:37.151963  573081 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-080837 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 10:10:37.667862  573081 kubeadm.go:319] [bootstrap-token] Using token: c3xn7r.p0qb0h17147juwlw
	I1101 10:10:37.670015  573081 out.go:252]   - Configuring RBAC rules ...
	I1101 10:10:37.670144  573081 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 10:10:37.679290  573081 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 10:10:37.690353  573081 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 10:10:37.693827  573081 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 10:10:37.697268  573081 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 10:10:37.703428  573081 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 10:10:37.731482  573081 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 10:10:38.066404  573081 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 10:10:38.127542  573081 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 10:10:38.131546  573081 kubeadm.go:319] 
	I1101 10:10:38.131642  573081 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 10:10:38.131653  573081 kubeadm.go:319] 
	I1101 10:10:38.131755  573081 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 10:10:38.131776  573081 kubeadm.go:319] 
	I1101 10:10:38.131800  573081 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 10:10:38.131856  573081 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 10:10:38.131915  573081 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 10:10:38.131922  573081 kubeadm.go:319] 
	I1101 10:10:38.131982  573081 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 10:10:38.132004  573081 kubeadm.go:319] 
	I1101 10:10:38.132092  573081 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 10:10:38.132104  573081 kubeadm.go:319] 
	I1101 10:10:38.132181  573081 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 10:10:38.132298  573081 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 10:10:38.132394  573081 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 10:10:38.132404  573081 kubeadm.go:319] 
	I1101 10:10:38.132547  573081 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 10:10:38.132657  573081 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 10:10:38.132670  573081 kubeadm.go:319] 
	I1101 10:10:38.132804  573081 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token c3xn7r.p0qb0h17147juwlw \
	I1101 10:10:38.132973  573081 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:56aa18b20985495d814b65ba7a2f910118620c74c98b944601f44598a9c0be1d \
	I1101 10:10:38.133016  573081 kubeadm.go:319] 	--control-plane 
	I1101 10:10:38.133028  573081 kubeadm.go:319] 
	I1101 10:10:38.133141  573081 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 10:10:38.133164  573081 kubeadm.go:319] 
	I1101 10:10:38.133298  573081 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token c3xn7r.p0qb0h17147juwlw \
	I1101 10:10:38.133482  573081 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:56aa18b20985495d814b65ba7a2f910118620c74c98b944601f44598a9c0be1d 
	I1101 10:10:38.142224  573081 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 10:10:38.142255  573081 cni.go:84] Creating CNI manager for ""
	I1101 10:10:38.142266  573081 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 10:10:38.143968  573081 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	W1101 10:10:34.264295  572974 pod_ready.go:104] pod "etcd-pause-533709" is not "Ready", error: <nil>
	W1101 10:10:36.267210  572974 pod_ready.go:104] pod "etcd-pause-533709" is not "Ready", error: <nil>
	W1101 10:10:38.267588  572974 pod_ready.go:104] pod "etcd-pause-533709" is not "Ready", error: <nil>
	I1101 10:10:37.204707  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:37.205563  573367 main.go:143] libmachine: domain embed-certs-468183 has current primary IP address 192.168.83.42 and MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:37.205582  573367 main.go:143] libmachine: found domain IP: 192.168.83.42
	I1101 10:10:37.205589  573367 main.go:143] libmachine: reserving static IP address...
	I1101 10:10:37.206043  573367 main.go:143] libmachine: unable to find host DHCP lease matching {name: "embed-certs-468183", mac: "52:54:00:78:7b:11", ip: "192.168.83.42"} in network mk-embed-certs-468183
	I1101 10:10:37.449248  573367 main.go:143] libmachine: reserved static IP address 192.168.83.42 for domain embed-certs-468183
	I1101 10:10:37.449272  573367 main.go:143] libmachine: waiting for SSH...
	I1101 10:10:37.449278  573367 main.go:143] libmachine: Getting to WaitForSSH function...
	I1101 10:10:37.452780  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:37.453341  573367 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:7b:11", ip: ""} in network mk-embed-certs-468183: {Iface:virbr5 ExpiryTime:2025-11-01 11:10:35 +0000 UTC Type:0 Mac:52:54:00:78:7b:11 Iaid: IPaddr:192.168.83.42 Prefix:24 Hostname:minikube Clientid:01:52:54:00:78:7b:11}
	I1101 10:10:37.453380  573367 main.go:143] libmachine: domain embed-certs-468183 has defined IP address 192.168.83.42 and MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:37.453815  573367 main.go:143] libmachine: Using SSH client type: native
	I1101 10:10:37.454096  573367 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.42 22 <nil> <nil>}
	I1101 10:10:37.454110  573367 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1101 10:10:37.575365  573367 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:10:37.575741  573367 main.go:143] libmachine: domain creation complete
	I1101 10:10:37.577584  573367 machine.go:94] provisionDockerMachine start ...
	I1101 10:10:37.580567  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:37.581068  573367 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:7b:11", ip: ""} in network mk-embed-certs-468183: {Iface:virbr5 ExpiryTime:2025-11-01 11:10:35 +0000 UTC Type:0 Mac:52:54:00:78:7b:11 Iaid: IPaddr:192.168.83.42 Prefix:24 Hostname:embed-certs-468183 Clientid:01:52:54:00:78:7b:11}
	I1101 10:10:37.581093  573367 main.go:143] libmachine: domain embed-certs-468183 has defined IP address 192.168.83.42 and MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:37.581266  573367 main.go:143] libmachine: Using SSH client type: native
	I1101 10:10:37.581460  573367 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.42 22 <nil> <nil>}
	I1101 10:10:37.581477  573367 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:10:37.702233  573367 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1101 10:10:37.702283  573367 buildroot.go:166] provisioning hostname "embed-certs-468183"
	I1101 10:10:37.706516  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:37.707122  573367 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:7b:11", ip: ""} in network mk-embed-certs-468183: {Iface:virbr5 ExpiryTime:2025-11-01 11:10:35 +0000 UTC Type:0 Mac:52:54:00:78:7b:11 Iaid: IPaddr:192.168.83.42 Prefix:24 Hostname:embed-certs-468183 Clientid:01:52:54:00:78:7b:11}
	I1101 10:10:37.707158  573367 main.go:143] libmachine: domain embed-certs-468183 has defined IP address 192.168.83.42 and MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:37.707469  573367 main.go:143] libmachine: Using SSH client type: native
	I1101 10:10:37.707716  573367 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.42 22 <nil> <nil>}
	I1101 10:10:37.707734  573367 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-468183 && echo "embed-certs-468183" | sudo tee /etc/hostname
	I1101 10:10:37.859466  573367 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-468183
	
	I1101 10:10:37.863437  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:37.863984  573367 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:7b:11", ip: ""} in network mk-embed-certs-468183: {Iface:virbr5 ExpiryTime:2025-11-01 11:10:35 +0000 UTC Type:0 Mac:52:54:00:78:7b:11 Iaid: IPaddr:192.168.83.42 Prefix:24 Hostname:embed-certs-468183 Clientid:01:52:54:00:78:7b:11}
	I1101 10:10:37.864034  573367 main.go:143] libmachine: domain embed-certs-468183 has defined IP address 192.168.83.42 and MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:37.864294  573367 main.go:143] libmachine: Using SSH client type: native
	I1101 10:10:37.864613  573367 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.42 22 <nil> <nil>}
	I1101 10:10:37.864646  573367 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-468183' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-468183/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-468183' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:10:37.999317  573367 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:10:37.999359  573367 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21833-530629/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-530629/.minikube}
	I1101 10:10:37.999436  573367 buildroot.go:174] setting up certificates
	I1101 10:10:37.999453  573367 provision.go:84] configureAuth start
	I1101 10:10:38.003211  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:38.003771  573367 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:7b:11", ip: ""} in network mk-embed-certs-468183: {Iface:virbr5 ExpiryTime:2025-11-01 11:10:35 +0000 UTC Type:0 Mac:52:54:00:78:7b:11 Iaid: IPaddr:192.168.83.42 Prefix:24 Hostname:embed-certs-468183 Clientid:01:52:54:00:78:7b:11}
	I1101 10:10:38.003812  573367 main.go:143] libmachine: domain embed-certs-468183 has defined IP address 192.168.83.42 and MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:38.007093  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:38.007729  573367 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:7b:11", ip: ""} in network mk-embed-certs-468183: {Iface:virbr5 ExpiryTime:2025-11-01 11:10:35 +0000 UTC Type:0 Mac:52:54:00:78:7b:11 Iaid: IPaddr:192.168.83.42 Prefix:24 Hostname:embed-certs-468183 Clientid:01:52:54:00:78:7b:11}
	I1101 10:10:38.007778  573367 main.go:143] libmachine: domain embed-certs-468183 has defined IP address 192.168.83.42 and MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:38.008010  573367 provision.go:143] copyHostCerts
	I1101 10:10:38.008083  573367 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-530629/.minikube/ca.pem, removing ...
	I1101 10:10:38.008114  573367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-530629/.minikube/ca.pem
	I1101 10:10:38.008207  573367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-530629/.minikube/ca.pem (1078 bytes)
	I1101 10:10:38.008340  573367 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-530629/.minikube/cert.pem, removing ...
	I1101 10:10:38.008353  573367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-530629/.minikube/cert.pem
	I1101 10:10:38.008401  573367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-530629/.minikube/cert.pem (1123 bytes)
	I1101 10:10:38.008498  573367 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-530629/.minikube/key.pem, removing ...
	I1101 10:10:38.008508  573367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-530629/.minikube/key.pem
	I1101 10:10:38.009134  573367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-530629/.minikube/key.pem (1675 bytes)
	I1101 10:10:38.009265  573367 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-530629/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca-key.pem org=jenkins.embed-certs-468183 san=[127.0.0.1 192.168.83.42 embed-certs-468183 localhost minikube]
	I1101 10:10:38.219217  573367 provision.go:177] copyRemoteCerts
	I1101 10:10:38.219323  573367 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:10:38.223171  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:38.223663  573367 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:7b:11", ip: ""} in network mk-embed-certs-468183: {Iface:virbr5 ExpiryTime:2025-11-01 11:10:35 +0000 UTC Type:0 Mac:52:54:00:78:7b:11 Iaid: IPaddr:192.168.83.42 Prefix:24 Hostname:embed-certs-468183 Clientid:01:52:54:00:78:7b:11}
	I1101 10:10:38.223700  573367 main.go:143] libmachine: domain embed-certs-468183 has defined IP address 192.168.83.42 and MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:38.223877  573367 sshutil.go:53] new ssh client: &{IP:192.168.83.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/embed-certs-468183/id_rsa Username:docker}
	I1101 10:10:38.333177  573367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:10:38.371609  573367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:10:38.405296  573367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 10:10:38.443379  573367 provision.go:87] duration metric: took 443.904241ms to configureAuth
	I1101 10:10:38.443419  573367 buildroot.go:189] setting minikube options for container-runtime
	I1101 10:10:38.443645  573367 config.go:182] Loaded profile config "embed-certs-468183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:10:38.447079  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:38.447550  573367 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:7b:11", ip: ""} in network mk-embed-certs-468183: {Iface:virbr5 ExpiryTime:2025-11-01 11:10:35 +0000 UTC Type:0 Mac:52:54:00:78:7b:11 Iaid: IPaddr:192.168.83.42 Prefix:24 Hostname:embed-certs-468183 Clientid:01:52:54:00:78:7b:11}
	I1101 10:10:38.447586  573367 main.go:143] libmachine: domain embed-certs-468183 has defined IP address 192.168.83.42 and MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:38.447817  573367 main.go:143] libmachine: Using SSH client type: native
	I1101 10:10:38.448133  573367 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.42 22 <nil> <nil>}
	I1101 10:10:38.448160  573367 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:10:38.726679  573367 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:10:38.726726  573367 machine.go:97] duration metric: took 1.149121484s to provisionDockerMachine
	I1101 10:10:38.726742  573367 client.go:176] duration metric: took 20.737379235s to LocalClient.Create
	I1101 10:10:38.726772  573367 start.go:167] duration metric: took 20.737458211s to libmachine.API.Create "embed-certs-468183"
	I1101 10:10:38.726783  573367 start.go:293] postStartSetup for "embed-certs-468183" (driver="kvm2")
	I1101 10:10:38.726797  573367 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:10:38.726886  573367 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:10:38.730826  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:38.731390  573367 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:7b:11", ip: ""} in network mk-embed-certs-468183: {Iface:virbr5 ExpiryTime:2025-11-01 11:10:35 +0000 UTC Type:0 Mac:52:54:00:78:7b:11 Iaid: IPaddr:192.168.83.42 Prefix:24 Hostname:embed-certs-468183 Clientid:01:52:54:00:78:7b:11}
	I1101 10:10:38.731430  573367 main.go:143] libmachine: domain embed-certs-468183 has defined IP address 192.168.83.42 and MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:38.731618  573367 sshutil.go:53] new ssh client: &{IP:192.168.83.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/embed-certs-468183/id_rsa Username:docker}
	I1101 10:10:38.825200  573367 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:10:38.831147  573367 info.go:137] Remote host: Buildroot 2025.02
	I1101 10:10:38.831175  573367 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-530629/.minikube/addons for local assets ...
	I1101 10:10:38.831253  573367 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-530629/.minikube/files for local assets ...
	I1101 10:10:38.831345  573367 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-530629/.minikube/files/etc/ssl/certs/5345152.pem -> 5345152.pem in /etc/ssl/certs
	I1101 10:10:38.831488  573367 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:10:38.846545  573367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/files/etc/ssl/certs/5345152.pem --> /etc/ssl/certs/5345152.pem (1708 bytes)
	I1101 10:10:38.879622  573367 start.go:296] duration metric: took 152.765264ms for postStartSetup
	I1101 10:10:38.883210  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:38.883643  573367 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:7b:11", ip: ""} in network mk-embed-certs-468183: {Iface:virbr5 ExpiryTime:2025-11-01 11:10:35 +0000 UTC Type:0 Mac:52:54:00:78:7b:11 Iaid: IPaddr:192.168.83.42 Prefix:24 Hostname:embed-certs-468183 Clientid:01:52:54:00:78:7b:11}
	I1101 10:10:38.883683  573367 main.go:143] libmachine: domain embed-certs-468183 has defined IP address 192.168.83.42 and MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:38.884040  573367 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/embed-certs-468183/config.json ...
	I1101 10:10:38.884336  573367 start.go:128] duration metric: took 20.898036527s to createHost
	I1101 10:10:38.887283  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:38.887648  573367 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:7b:11", ip: ""} in network mk-embed-certs-468183: {Iface:virbr5 ExpiryTime:2025-11-01 11:10:35 +0000 UTC Type:0 Mac:52:54:00:78:7b:11 Iaid: IPaddr:192.168.83.42 Prefix:24 Hostname:embed-certs-468183 Clientid:01:52:54:00:78:7b:11}
	I1101 10:10:38.887678  573367 main.go:143] libmachine: domain embed-certs-468183 has defined IP address 192.168.83.42 and MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:38.887877  573367 main.go:143] libmachine: Using SSH client type: native
	I1101 10:10:38.888155  573367 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.42 22 <nil> <nil>}
	I1101 10:10:38.888172  573367 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1101 10:10:39.010210  573367 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761991838.963449647
	
	I1101 10:10:39.010239  573367 fix.go:216] guest clock: 1761991838.963449647
	I1101 10:10:39.010247  573367 fix.go:229] Guest: 2025-11-01 10:10:38.963449647 +0000 UTC Remote: 2025-11-01 10:10:38.884356723 +0000 UTC m=+27.582695542 (delta=79.092924ms)
	I1101 10:10:39.010269  573367 fix.go:200] guest clock delta is within tolerance: 79.092924ms
	I1101 10:10:39.010275  573367 start.go:83] releasing machines lock for "embed-certs-468183", held for 21.024175042s
	I1101 10:10:39.013132  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:39.013543  573367 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:7b:11", ip: ""} in network mk-embed-certs-468183: {Iface:virbr5 ExpiryTime:2025-11-01 11:10:35 +0000 UTC Type:0 Mac:52:54:00:78:7b:11 Iaid: IPaddr:192.168.83.42 Prefix:24 Hostname:embed-certs-468183 Clientid:01:52:54:00:78:7b:11}
	I1101 10:10:39.013563  573367 main.go:143] libmachine: domain embed-certs-468183 has defined IP address 192.168.83.42 and MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:39.014104  573367 ssh_runner.go:195] Run: cat /version.json
	I1101 10:10:39.014199  573367 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:10:39.017789  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:39.017986  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:39.018319  573367 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:7b:11", ip: ""} in network mk-embed-certs-468183: {Iface:virbr5 ExpiryTime:2025-11-01 11:10:35 +0000 UTC Type:0 Mac:52:54:00:78:7b:11 Iaid: IPaddr:192.168.83.42 Prefix:24 Hostname:embed-certs-468183 Clientid:01:52:54:00:78:7b:11}
	I1101 10:10:39.018353  573367 main.go:143] libmachine: domain embed-certs-468183 has defined IP address 192.168.83.42 and MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:39.018438  573367 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:7b:11", ip: ""} in network mk-embed-certs-468183: {Iface:virbr5 ExpiryTime:2025-11-01 11:10:35 +0000 UTC Type:0 Mac:52:54:00:78:7b:11 Iaid: IPaddr:192.168.83.42 Prefix:24 Hostname:embed-certs-468183 Clientid:01:52:54:00:78:7b:11}
	I1101 10:10:39.018502  573367 main.go:143] libmachine: domain embed-certs-468183 has defined IP address 192.168.83.42 and MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:39.018551  573367 sshutil.go:53] new ssh client: &{IP:192.168.83.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/embed-certs-468183/id_rsa Username:docker}
	I1101 10:10:39.018789  573367 sshutil.go:53] new ssh client: &{IP:192.168.83.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/embed-certs-468183/id_rsa Username:docker}
	I1101 10:10:39.130563  573367 ssh_runner.go:195] Run: systemctl --version
	I1101 10:10:39.137802  573367 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:10:39.306713  573367 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:10:39.315528  573367 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:10:39.315610  573367 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:10:39.338455  573367 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 10:10:39.338487  573367 start.go:496] detecting cgroup driver to use...
	I1101 10:10:39.338581  573367 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:10:39.370628  573367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:10:39.390305  573367 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:10:39.390369  573367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:10:39.409945  573367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:10:39.428368  573367 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:10:39.615872  573367 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:10:39.841366  573367 docker.go:234] disabling docker service ...
	I1101 10:10:39.841455  573367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:10:39.858136  573367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:10:39.874425  573367 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:10:40.037612  573367 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:10:40.191403  573367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:10:40.208814  573367 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:10:40.234701  573367 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:10:40.234767  573367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:10:40.248073  573367 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:10:40.248155  573367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:10:40.261116  573367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:10:40.275410  573367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:10:40.288371  573367 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:10:40.302378  573367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:10:40.316102  573367 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:10:40.339004  573367 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:10:40.353182  573367 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:10:40.364252  573367 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 10:10:40.364322  573367 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 10:10:40.397688  573367 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:10:40.425139  573367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:10:40.598185  573367 ssh_runner.go:195] Run: sudo systemctl restart crio
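	For orientation, the sequence of sed edits logged above rewrites minikube's CRI-O drop-in just before the crio restart on this line. A rough reconstruction of what /etc/crio/crio.conf.d/02-crio.conf would contain after those edits is sketched below; it is inferred from the commands shown in the log, not a capture of the file on the VM, and the section headers simply follow CRI-O's standard config layout.

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]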
	I1101 10:10:40.731129  573367 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:10:40.731216  573367 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:10:40.739606  573367 start.go:564] Will wait 60s for crictl version
	I1101 10:10:40.739699  573367 ssh_runner.go:195] Run: which crictl
	I1101 10:10:40.745050  573367 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 10:10:40.797153  573367 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1101 10:10:40.797241  573367 ssh_runner.go:195] Run: crio --version
	I1101 10:10:40.832657  573367 ssh_runner.go:195] Run: crio --version
	I1101 10:10:40.867706  573367 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1101 10:10:40.871830  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:40.872400  573367 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:7b:11", ip: ""} in network mk-embed-certs-468183: {Iface:virbr5 ExpiryTime:2025-11-01 11:10:35 +0000 UTC Type:0 Mac:52:54:00:78:7b:11 Iaid: IPaddr:192.168.83.42 Prefix:24 Hostname:embed-certs-468183 Clientid:01:52:54:00:78:7b:11}
	I1101 10:10:40.872428  573367 main.go:143] libmachine: domain embed-certs-468183 has defined IP address 192.168.83.42 and MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:40.872615  573367 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1101 10:10:40.877856  573367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:10:40.894755  573367 kubeadm.go:884] updating cluster {Name:embed-certs-468183 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.34.1 ClusterName:embed-certs-468183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.42 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:10:40.894873  573367 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:10:40.894947  573367 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:10:40.935225  573367 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1101 10:10:40.935318  573367 ssh_runner.go:195] Run: which lz4
	I1101 10:10:40.940417  573367 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 10:10:40.946039  573367 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 10:10:40.946075  573367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1101 10:10:38.145009  573081 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 10:10:38.178201  573081 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1101 10:10:38.250010  573081 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 10:10:38.250072  573081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:10:38.250078  573081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-080837 minikube.k8s.io/updated_at=2025_11_01T10_10_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7 minikube.k8s.io/name=old-k8s-version-080837 minikube.k8s.io/primary=true
	I1101 10:10:38.639939  573081 ops.go:34] apiserver oom_adj: -16
	I1101 10:10:38.639975  573081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:10:39.140114  573081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:10:39.641026  573081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:10:40.140659  573081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:10:40.640388  573081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:10:41.140872  573081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:10:41.640091  573081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:10:42.140089  573081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:10:42.640064  573081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1101 10:10:40.765670  572974 pod_ready.go:104] pod "etcd-pause-533709" is not "Ready", error: <nil>
	W1101 10:10:42.765732  572974 pod_ready.go:104] pod "etcd-pause-533709" is not "Ready", error: <nil>
	I1101 10:10:43.266700  572974 pod_ready.go:94] pod "etcd-pause-533709" is "Ready"
	I1101 10:10:43.266752  572974 pod_ready.go:86] duration metric: took 13.508494921s for pod "etcd-pause-533709" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:10:43.269375  572974 pod_ready.go:83] waiting for pod "kube-apiserver-pause-533709" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:10:43.275936  572974 pod_ready.go:94] pod "kube-apiserver-pause-533709" is "Ready"
	I1101 10:10:43.275975  572974 pod_ready.go:86] duration metric: took 6.561549ms for pod "kube-apiserver-pause-533709" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:10:43.278319  572974 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-533709" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:10:43.283272  572974 pod_ready.go:94] pod "kube-controller-manager-pause-533709" is "Ready"
	I1101 10:10:43.283308  572974 pod_ready.go:86] duration metric: took 4.963094ms for pod "kube-controller-manager-pause-533709" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:10:43.287746  572974 pod_ready.go:83] waiting for pod "kube-proxy-mkdfj" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:10:43.462766  572974 pod_ready.go:94] pod "kube-proxy-mkdfj" is "Ready"
	I1101 10:10:43.462802  572974 pod_ready.go:86] duration metric: took 175.021381ms for pod "kube-proxy-mkdfj" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:10:43.663406  572974 pod_ready.go:83] waiting for pod "kube-scheduler-pause-533709" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:10:44.062440  572974 pod_ready.go:94] pod "kube-scheduler-pause-533709" is "Ready"
	I1101 10:10:44.062481  572974 pod_ready.go:86] duration metric: took 399.032223ms for pod "kube-scheduler-pause-533709" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:10:44.062497  572974 pod_ready.go:40] duration metric: took 14.321295721s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:10:44.116861  572974 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 10:10:44.119113  572974 out.go:179] * Done! kubectl is now configured to use "pause-533709" cluster and "default" namespace by default
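	The pod_ready.go lines above record the "Ready or be gone" wait that pause-533709 just completed. A minimal client-go sketch of that pattern follows; it is an illustration written against the public Kubernetes API, not minikube's actual pod_ready.go, and waitPodReady, the kubeconfig path, and the polling interval are assumptions made for the example.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls until the pod's Ready condition is True, the pod is
	// gone, or the timeout expires (mirroring the "Ready" or be gone wording
	// in the log above). Hypothetical helper, not minikube code.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return nil // pod is gone, which also ends the wait
			}
			if err != nil {
				return err
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
	}

	func main() {
		// Load ~/.kube/config; using the default kubeconfig path is an assumption here.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-pause-533709", 4*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("etcd-pause-533709 is Ready")
	}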
	
	
	==> CRI-O <==
	Nov 01 10:10:44 pause-533709 crio[3025]: time="2025-11-01 10:10:44.923932974Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:498354aef4312f07ed2c76ce63c5943b9749ab20856b79300060015652003383,PodSandboxId:448d1985ab76739bb42ffccbdf35736a33a142fe1b998d80620735bb7649be34,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761991828721437670,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pzwdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b3dc10c-d5ad-40f9-a28b-c4a89479f817,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8ffdea27f223ff335f1028f2c3f5349fd3a05ea5e4ca994148b67c06ef30019,PodSandboxId:a0e82a8a822cb3c6aee643fce25b92f1858fd3ddaf21afb6bbc30bad5c755ffe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761991823438420171,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f0943a93b3ceab023a59b1a3fb2aeb,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e204777a5b47ad5602b9943aa82ef3b3c9cbc9ffab40a8c53b196972ab1f8096,PodSandboxId:83d5711c535731e8d3191ea42d2a1c3caaa12b17b331b2b206c4eecabc89d3e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9
965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761991823297216243,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c9f43c330cbd2ee80a698ee9579baec,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d769cb43b90bbe32810e36eb46267c2143eb8836ca85a96afb4bf2f7172db304,PodSandboxId:0140522956c8ab14f515fbd76a0547b021cd5568bed001ba350245da916a0023,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d06195
38f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761991823267168173,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 991b90746afec243940c42caa25f71de,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98ff14e18b0891e76d4fd2fadbbac3a6e50f2d02759c03d0fb851ab167f8fbf3,PodSandboxId:b6692431ababa239bd5dd47ad4baff5026564b431363e83a09cd90bf0fed9363,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761991822954918309,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e724985d54b20b982f6f22f4e5940b63,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:429c6ef4a6c572cb4492e1a9dda379db7efab14e62f9fc850f89c70fc81bb4ba,PodSandboxId:ee4cd9b27fade
8ce46d92db9f4a50569a2b873fb044aa09a979db8f7eaeb5cf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761991800668520680,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c0c82af-9116-41ce-9b01-bb2802550969,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c7105be5e4c21dc0972f008c4fd1f88839a3bcfa0f60e4c6cf4063c49a283ef,PodSandboxId:a0e82a8a822cb3c6aee643fce25b92f1858fd3ddaf21af
b6bbc30bad5c755ffe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761991800647213719,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f0943a93b3ceab023a59b1a3fb2aeb,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b993b8fbb2d6a2d30b60ce
04571b393da5a12345208c74d4d9c42e72514262a7,PodSandboxId:b6692431ababa239bd5dd47ad4baff5026564b431363e83a09cd90bf0fed9363,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761991799913118772,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e724985d54b20b982f6f22f4e5940b63,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes
.pod.terminationGracePeriod: 30,},},&Container{Id:aa8db6bc66adcb7f5314b8afd3ae06e27b6df6b2f45271c09a78271c6e6aa221,PodSandboxId:bae233ae556260c4e88d2193e20561540e04294d35472b5f5d6a4cff2e0a6764,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761991753185539055,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pzwdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b3dc10c-d5ad-40f9-a28b-c4a89479f817,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"proto
col\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bbaa009a8d7c572ab9c6f67864a5b74d4937c9c0fdfb81ff3db36bd7b78f19e,PodSandboxId:6d8c483d789c5de14f76a3b12b920558e75cc4044bf2fc60ffb5c50b86e70116,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761991752206049321,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkdfj,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 1c0c82af-9116-41ce-9b01-bb2802550969,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e362762826b71a934dbb5eea442d975cc05597b31ae86c9e7948f1898ab565fc,PodSandboxId:6af75b2f4b1cdc4fddac2fc53200bd4bc81161be7df022fe2f37b6831035bf6e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761991740251984912,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 3c9f43c330cbd2ee80a698ee9579baec,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877956ec3f06ed232e4f3b24002a100db3b52c5d04bbdac7f73bc031d79d7458,PodSandboxId:5579fe961309082acb8f8271e1d22873a81d3ad15b76f11154982cadcc549444,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761991740187907105,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 991b90746afec243940c42caa25f71de,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=924bb38b-4dba-43e1-b0d5-3f9cb6debdf1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:10:44 pause-533709 crio[3025]: time="2025-11-01 10:10:44.984932092Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6e7e458a-7350-4688-ae48-1220190cdfc5 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:10:44 pause-533709 crio[3025]: time="2025-11-01 10:10:44.985020709Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6e7e458a-7350-4688-ae48-1220190cdfc5 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:10:44 pause-533709 crio[3025]: time="2025-11-01 10:10:44.986467756Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=342a56e0-701c-4d4d-94bc-0a8516a0af26 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:10:44 pause-533709 crio[3025]: time="2025-11-01 10:10:44.986873981Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761991844986853602,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=342a56e0-701c-4d4d-94bc-0a8516a0af26 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:10:44 pause-533709 crio[3025]: time="2025-11-01 10:10:44.987437174Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b5c394f5-8a3c-4d08-b8dd-c84a08ab365a name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:10:44 pause-533709 crio[3025]: time="2025-11-01 10:10:44.987507502Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b5c394f5-8a3c-4d08-b8dd-c84a08ab365a name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:10:44 pause-533709 crio[3025]: time="2025-11-01 10:10:44.987774222Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:498354aef4312f07ed2c76ce63c5943b9749ab20856b79300060015652003383,PodSandboxId:448d1985ab76739bb42ffccbdf35736a33a142fe1b998d80620735bb7649be34,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761991828721437670,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pzwdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b3dc10c-d5ad-40f9-a28b-c4a89479f817,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8ffdea27f223ff335f1028f2c3f5349fd3a05ea5e4ca994148b67c06ef30019,PodSandboxId:a0e82a8a822cb3c6aee643fce25b92f1858fd3ddaf21afb6bbc30bad5c755ffe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761991823438420171,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f0943a93b3ceab023a59b1a3fb2aeb,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e204777a5b47ad5602b9943aa82ef3b3c9cbc9ffab40a8c53b196972ab1f8096,PodSandboxId:83d5711c535731e8d3191ea42d2a1c3caaa12b17b331b2b206c4eecabc89d3e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9
965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761991823297216243,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c9f43c330cbd2ee80a698ee9579baec,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d769cb43b90bbe32810e36eb46267c2143eb8836ca85a96afb4bf2f7172db304,PodSandboxId:0140522956c8ab14f515fbd76a0547b021cd5568bed001ba350245da916a0023,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d06195
38f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761991823267168173,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 991b90746afec243940c42caa25f71de,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98ff14e18b0891e76d4fd2fadbbac3a6e50f2d02759c03d0fb851ab167f8fbf3,PodSandboxId:b6692431ababa239bd5dd47ad4baff5026564b431363e83a09cd90bf0fed9363,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761991822954918309,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e724985d54b20b982f6f22f4e5940b63,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:429c6ef4a6c572cb4492e1a9dda379db7efab14e62f9fc850f89c70fc81bb4ba,PodSandboxId:ee4cd9b27fade
8ce46d92db9f4a50569a2b873fb044aa09a979db8f7eaeb5cf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761991800668520680,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c0c82af-9116-41ce-9b01-bb2802550969,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c7105be5e4c21dc0972f008c4fd1f88839a3bcfa0f60e4c6cf4063c49a283ef,PodSandboxId:a0e82a8a822cb3c6aee643fce25b92f1858fd3ddaf21af
b6bbc30bad5c755ffe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761991800647213719,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f0943a93b3ceab023a59b1a3fb2aeb,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b993b8fbb2d6a2d30b60ce
04571b393da5a12345208c74d4d9c42e72514262a7,PodSandboxId:b6692431ababa239bd5dd47ad4baff5026564b431363e83a09cd90bf0fed9363,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761991799913118772,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e724985d54b20b982f6f22f4e5940b63,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes
.pod.terminationGracePeriod: 30,},},&Container{Id:aa8db6bc66adcb7f5314b8afd3ae06e27b6df6b2f45271c09a78271c6e6aa221,PodSandboxId:bae233ae556260c4e88d2193e20561540e04294d35472b5f5d6a4cff2e0a6764,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761991753185539055,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pzwdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b3dc10c-d5ad-40f9-a28b-c4a89479f817,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"proto
col\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bbaa009a8d7c572ab9c6f67864a5b74d4937c9c0fdfb81ff3db36bd7b78f19e,PodSandboxId:6d8c483d789c5de14f76a3b12b920558e75cc4044bf2fc60ffb5c50b86e70116,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761991752206049321,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkdfj,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 1c0c82af-9116-41ce-9b01-bb2802550969,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e362762826b71a934dbb5eea442d975cc05597b31ae86c9e7948f1898ab565fc,PodSandboxId:6af75b2f4b1cdc4fddac2fc53200bd4bc81161be7df022fe2f37b6831035bf6e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761991740251984912,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 3c9f43c330cbd2ee80a698ee9579baec,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877956ec3f06ed232e4f3b24002a100db3b52c5d04bbdac7f73bc031d79d7458,PodSandboxId:5579fe961309082acb8f8271e1d22873a81d3ad15b76f11154982cadcc549444,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761991740187907105,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 991b90746afec243940c42caa25f71de,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b5c394f5-8a3c-4d08-b8dd-c84a08ab365a name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:10:45 pause-533709 crio[3025]: time="2025-11-01 10:10:45.040765098Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c887b204-12c0-4d09-bdfa-6dbfa48960aa name=/runtime.v1.RuntimeService/Version
	Nov 01 10:10:45 pause-533709 crio[3025]: time="2025-11-01 10:10:45.040833125Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c887b204-12c0-4d09-bdfa-6dbfa48960aa name=/runtime.v1.RuntimeService/Version
	Nov 01 10:10:45 pause-533709 crio[3025]: time="2025-11-01 10:10:45.042800849Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=be68a687-4e86-47fc-9174-3fe0924b2995 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:10:45 pause-533709 crio[3025]: time="2025-11-01 10:10:45.043277026Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761991845043252230,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=be68a687-4e86-47fc-9174-3fe0924b2995 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:10:45 pause-533709 crio[3025]: time="2025-11-01 10:10:45.043928740Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=14986569-0e34-4d1a-a89c-7bcef9c07865 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:10:45 pause-533709 crio[3025]: time="2025-11-01 10:10:45.044152258Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=14986569-0e34-4d1a-a89c-7bcef9c07865 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:10:45 pause-533709 crio[3025]: time="2025-11-01 10:10:45.044875790Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:498354aef4312f07ed2c76ce63c5943b9749ab20856b79300060015652003383,PodSandboxId:448d1985ab76739bb42ffccbdf35736a33a142fe1b998d80620735bb7649be34,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761991828721437670,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pzwdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b3dc10c-d5ad-40f9-a28b-c4a89479f817,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8ffdea27f223ff335f1028f2c3f5349fd3a05ea5e4ca994148b67c06ef30019,PodSandboxId:a0e82a8a822cb3c6aee643fce25b92f1858fd3ddaf21afb6bbc30bad5c755ffe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761991823438420171,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f0943a93b3ceab023a59b1a3fb2aeb,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e204777a5b47ad5602b9943aa82ef3b3c9cbc9ffab40a8c53b196972ab1f8096,PodSandboxId:83d5711c535731e8d3191ea42d2a1c3caaa12b17b331b2b206c4eecabc89d3e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9
965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761991823297216243,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c9f43c330cbd2ee80a698ee9579baec,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d769cb43b90bbe32810e36eb46267c2143eb8836ca85a96afb4bf2f7172db304,PodSandboxId:0140522956c8ab14f515fbd76a0547b021cd5568bed001ba350245da916a0023,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d06195
38f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761991823267168173,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 991b90746afec243940c42caa25f71de,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98ff14e18b0891e76d4fd2fadbbac3a6e50f2d02759c03d0fb851ab167f8fbf3,PodSandboxId:b6692431ababa239bd5dd47ad4baff5026564b431363e83a09cd90bf0fed9363,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761991822954918309,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e724985d54b20b982f6f22f4e5940b63,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:429c6ef4a6c572cb4492e1a9dda379db7efab14e62f9fc850f89c70fc81bb4ba,PodSandboxId:ee4cd9b27fade
8ce46d92db9f4a50569a2b873fb044aa09a979db8f7eaeb5cf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761991800668520680,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c0c82af-9116-41ce-9b01-bb2802550969,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c7105be5e4c21dc0972f008c4fd1f88839a3bcfa0f60e4c6cf4063c49a283ef,PodSandboxId:a0e82a8a822cb3c6aee643fce25b92f1858fd3ddaf21af
b6bbc30bad5c755ffe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761991800647213719,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f0943a93b3ceab023a59b1a3fb2aeb,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b993b8fbb2d6a2d30b60ce
04571b393da5a12345208c74d4d9c42e72514262a7,PodSandboxId:b6692431ababa239bd5dd47ad4baff5026564b431363e83a09cd90bf0fed9363,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761991799913118772,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e724985d54b20b982f6f22f4e5940b63,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes
.pod.terminationGracePeriod: 30,},},&Container{Id:aa8db6bc66adcb7f5314b8afd3ae06e27b6df6b2f45271c09a78271c6e6aa221,PodSandboxId:bae233ae556260c4e88d2193e20561540e04294d35472b5f5d6a4cff2e0a6764,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761991753185539055,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pzwdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b3dc10c-d5ad-40f9-a28b-c4a89479f817,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"proto
col\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bbaa009a8d7c572ab9c6f67864a5b74d4937c9c0fdfb81ff3db36bd7b78f19e,PodSandboxId:6d8c483d789c5de14f76a3b12b920558e75cc4044bf2fc60ffb5c50b86e70116,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761991752206049321,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkdfj,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 1c0c82af-9116-41ce-9b01-bb2802550969,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e362762826b71a934dbb5eea442d975cc05597b31ae86c9e7948f1898ab565fc,PodSandboxId:6af75b2f4b1cdc4fddac2fc53200bd4bc81161be7df022fe2f37b6831035bf6e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761991740251984912,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 3c9f43c330cbd2ee80a698ee9579baec,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877956ec3f06ed232e4f3b24002a100db3b52c5d04bbdac7f73bc031d79d7458,PodSandboxId:5579fe961309082acb8f8271e1d22873a81d3ad15b76f11154982cadcc549444,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761991740187907105,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 991b90746afec243940c42caa25f71de,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=14986569-0e34-4d1a-a89c-7bcef9c07865 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:10:45 pause-533709 crio[3025]: time="2025-11-01 10:10:45.095866836Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5e841a51-7c44-484f-a99b-95afcb329142 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:10:45 pause-533709 crio[3025]: time="2025-11-01 10:10:45.096136513Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5e841a51-7c44-484f-a99b-95afcb329142 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:10:45 pause-533709 crio[3025]: time="2025-11-01 10:10:45.099315325Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dc668044-4829-46ac-adf9-2a8b2c7f6806 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:10:45 pause-533709 crio[3025]: time="2025-11-01 10:10:45.101328960Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761991845101292441,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dc668044-4829-46ac-adf9-2a8b2c7f6806 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:10:45 pause-533709 crio[3025]: time="2025-11-01 10:10:45.102872546Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f41a5257-aaac-46c2-aa5d-6db403a182e8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:10:45 pause-533709 crio[3025]: time="2025-11-01 10:10:45.103417875Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f41a5257-aaac-46c2-aa5d-6db403a182e8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:10:45 pause-533709 crio[3025]: time="2025-11-01 10:10:45.103725574Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:498354aef4312f07ed2c76ce63c5943b9749ab20856b79300060015652003383,PodSandboxId:448d1985ab76739bb42ffccbdf35736a33a142fe1b998d80620735bb7649be34,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761991828721437670,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pzwdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b3dc10c-d5ad-40f9-a28b-c4a89479f817,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8ffdea27f223ff335f1028f2c3f5349fd3a05ea5e4ca994148b67c06ef30019,PodSandboxId:a0e82a8a822cb3c6aee643fce25b92f1858fd3ddaf21afb6bbc30bad5c755ffe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761991823438420171,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f0943a93b3ceab023a59b1a3fb2aeb,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e204777a5b47ad5602b9943aa82ef3b3c9cbc9ffab40a8c53b196972ab1f8096,PodSandboxId:83d5711c535731e8d3191ea42d2a1c3caaa12b17b331b2b206c4eecabc89d3e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9
965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761991823297216243,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c9f43c330cbd2ee80a698ee9579baec,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d769cb43b90bbe32810e36eb46267c2143eb8836ca85a96afb4bf2f7172db304,PodSandboxId:0140522956c8ab14f515fbd76a0547b021cd5568bed001ba350245da916a0023,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d06195
38f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761991823267168173,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 991b90746afec243940c42caa25f71de,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98ff14e18b0891e76d4fd2fadbbac3a6e50f2d02759c03d0fb851ab167f8fbf3,PodSandboxId:b6692431ababa239bd5dd47ad4baff5026564b431363e83a09cd90bf0fed9363,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761991822954918309,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e724985d54b20b982f6f22f4e5940b63,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:429c6ef4a6c572cb4492e1a9dda379db7efab14e62f9fc850f89c70fc81bb4ba,PodSandboxId:ee4cd9b27fade
8ce46d92db9f4a50569a2b873fb044aa09a979db8f7eaeb5cf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761991800668520680,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c0c82af-9116-41ce-9b01-bb2802550969,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c7105be5e4c21dc0972f008c4fd1f88839a3bcfa0f60e4c6cf4063c49a283ef,PodSandboxId:a0e82a8a822cb3c6aee643fce25b92f1858fd3ddaf21af
b6bbc30bad5c755ffe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761991800647213719,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f0943a93b3ceab023a59b1a3fb2aeb,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b993b8fbb2d6a2d30b60ce
04571b393da5a12345208c74d4d9c42e72514262a7,PodSandboxId:b6692431ababa239bd5dd47ad4baff5026564b431363e83a09cd90bf0fed9363,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761991799913118772,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e724985d54b20b982f6f22f4e5940b63,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes
.pod.terminationGracePeriod: 30,},},&Container{Id:aa8db6bc66adcb7f5314b8afd3ae06e27b6df6b2f45271c09a78271c6e6aa221,PodSandboxId:bae233ae556260c4e88d2193e20561540e04294d35472b5f5d6a4cff2e0a6764,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761991753185539055,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pzwdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b3dc10c-d5ad-40f9-a28b-c4a89479f817,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"proto
col\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bbaa009a8d7c572ab9c6f67864a5b74d4937c9c0fdfb81ff3db36bd7b78f19e,PodSandboxId:6d8c483d789c5de14f76a3b12b920558e75cc4044bf2fc60ffb5c50b86e70116,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761991752206049321,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkdfj,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 1c0c82af-9116-41ce-9b01-bb2802550969,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e362762826b71a934dbb5eea442d975cc05597b31ae86c9e7948f1898ab565fc,PodSandboxId:6af75b2f4b1cdc4fddac2fc53200bd4bc81161be7df022fe2f37b6831035bf6e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761991740251984912,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 3c9f43c330cbd2ee80a698ee9579baec,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877956ec3f06ed232e4f3b24002a100db3b52c5d04bbdac7f73bc031d79d7458,PodSandboxId:5579fe961309082acb8f8271e1d22873a81d3ad15b76f11154982cadcc549444,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761991740187907105,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 991b90746afec243940c42caa25f71de,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f41a5257-aaac-46c2-aa5d-6db403a182e8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:10:45 pause-533709 crio[3025]: time="2025-11-01 10:10:45.131767167Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=79bf86da-58d1-419e-9f4e-1c47bb3fa693 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 01 10:10:45 pause-533709 crio[3025]: time="2025-11-01 10:10:45.132004540Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:448d1985ab76739bb42ffccbdf35736a33a142fe1b998d80620735bb7649be34,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-pzwdg,Uid:6b3dc10c-d5ad-40f9-a28b-c4a89479f817,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1761991828398811301,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-pzwdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b3dc10c-d5ad-40f9-a28b-c4a89479f817,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T10:10:28.076627666Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:83d5711c535731e8d3191ea42d2a1c3caaa12b17b331b2b206c4eecabc89d3e8,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-533709,Uid:3c9f43c330cbd2ee80a698ee9579baec,Namespace:kube-system,
Attempt:1,},State:SANDBOX_READY,CreatedAt:1761991823040807216,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c9f43c330cbd2ee80a698ee9579baec,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3c9f43c330cbd2ee80a698ee9579baec,kubernetes.io/config.seen: 2025-11-01T10:10:03.075472038Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0140522956c8ab14f515fbd76a0547b021cd5568bed001ba350245da916a0023,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-533709,Uid:991b90746afec243940c42caa25f71de,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1761991822978942695,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 991b90746afec
243940c42caa25f71de,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 991b90746afec243940c42caa25f71de,kubernetes.io/config.seen: 2025-11-01T10:10:03.075470485Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ee4cd9b27fade8ce46d92db9f4a50569a2b873fb044aa09a979db8f7eaeb5cf2,Metadata:&PodSandboxMetadata{Name:kube-proxy-mkdfj,Uid:1c0c82af-9116-41ce-9b01-bb2802550969,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1761991799645911332,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-mkdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c0c82af-9116-41ce-9b01-bb2802550969,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T10:09:11.363833927Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b6692431ababa239bd5dd47ad4baff5026564b431363e83a09cd90bf0fed9363,Metadata:&PodSan
dboxMetadata{Name:etcd-pause-533709,Uid:e724985d54b20b982f6f22f4e5940b63,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1761991799640717060,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e724985d54b20b982f6f22f4e5940b63,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.122:2379,kubernetes.io/config.hash: e724985d54b20b982f6f22f4e5940b63,kubernetes.io/config.seen: 2025-11-01T10:09:06.061345382Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a0e82a8a822cb3c6aee643fce25b92f1858fd3ddaf21afb6bbc30bad5c755ffe,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-533709,Uid:63f0943a93b3ceab023a59b1a3fb2aeb,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1761991799627802220,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.
kubernetes.pod.name: kube-apiserver-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f0943a93b3ceab023a59b1a3fb2aeb,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.122:8443,kubernetes.io/config.hash: 63f0943a93b3ceab023a59b1a3fb2aeb,kubernetes.io/config.seen: 2025-11-01T10:09:06.061348370Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=79bf86da-58d1-419e-9f4e-1c47bb3fa693 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 01 10:10:45 pause-533709 crio[3025]: time="2025-11-01 10:10:45.133282100Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ef6cb355-28be-41af-9538-de702669632b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	498354aef4312       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   16 seconds ago       Running             coredns                   1                   448d1985ab767       coredns-66bc5c9577-pzwdg
	b8ffdea27f223       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   21 seconds ago       Running             kube-apiserver            2                   a0e82a8a822cb       kube-apiserver-pause-533709
	e204777a5b47a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   21 seconds ago       Running             kube-scheduler            1                   83d5711c53573       kube-scheduler-pause-533709
	d769cb43b90bb       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   21 seconds ago       Running             kube-controller-manager   1                   0140522956c8a       kube-controller-manager-pause-533709
	98ff14e18b089       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   22 seconds ago       Running             etcd                      2                   b6692431ababa       etcd-pause-533709
	429c6ef4a6c57       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   44 seconds ago       Running             kube-proxy                1                   ee4cd9b27fade       kube-proxy-mkdfj
	1c7105be5e4c2       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   44 seconds ago       Exited              kube-apiserver            1                   a0e82a8a822cb       kube-apiserver-pause-533709
	b993b8fbb2d6a       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   45 seconds ago       Exited              etcd                      1                   b6692431ababa       etcd-pause-533709
	aa8db6bc66adc       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   0                   bae233ae55626       coredns-66bc5c9577-pzwdg
	8bbaa009a8d7c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   About a minute ago   Exited              kube-proxy                0                   6d8c483d789c5       kube-proxy-mkdfj
	e362762826b71       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   About a minute ago   Exited              kube-scheduler            0                   6af75b2f4b1cd       kube-scheduler-pause-533709
	877956ec3f06e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   About a minute ago   Exited              kube-controller-manager   0                   5579fe9613090       kube-controller-manager-pause-533709
	
	
	==> coredns [498354aef4312f07ed2c76ce63c5943b9749ab20856b79300060015652003383] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8b8641eae0af5337389aa76a78f71d2e2a7bd54cc199277be5abe199aebbfd3c9e156259680c91eb397a4c282437fd35af249d42857043b32bf3beb690ad2f54
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53740 - 29776 "HINFO IN 8738413579764429072.474708066066726926. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.04444825s
	
	
	==> coredns [aa8db6bc66adcb7f5314b8afd3ae06e27b6df6b2f45271c09a78271c6e6aa221] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-533709
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-533709
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=pause-533709
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_09_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:09:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-533709
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:10:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:10:27 +0000   Sat, 01 Nov 2025 10:09:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:10:27 +0000   Sat, 01 Nov 2025 10:09:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:10:27 +0000   Sat, 01 Nov 2025 10:09:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:10:27 +0000   Sat, 01 Nov 2025 10:09:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.122
	  Hostname:    pause-533709
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 3eb95cfe69ba40d3929f224e1e009616
	  System UUID:                3eb95cfe-69ba-40d3-929f-224e1e009616
	  Boot ID:                    eeb63cf3-e129-4c77-9267-60c4a9a96166
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-pzwdg                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     94s
	  kube-system                 etcd-pause-533709                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         99s
	  kube-system                 kube-apiserver-pause-533709             250m (12%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-controller-manager-pause-533709    200m (10%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-mkdfj                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-scheduler-pause-533709             100m (5%)     0 (0%)      0 (0%)           0 (0%)         100s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 92s                kube-proxy       
	  Normal  Starting                 17s                kube-proxy       
	  Normal  NodeHasSufficientPID     99s                kubelet          Node pause-533709 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  99s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  99s                kubelet          Node pause-533709 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s                kubelet          Node pause-533709 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 99s                kubelet          Starting kubelet.
	  Normal  NodeReady                98s                kubelet          Node pause-533709 status is now: NodeReady
	  Normal  RegisteredNode           95s                node-controller  Node pause-533709 event: Registered Node pause-533709 in Controller
	  Normal  Starting                 42s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  42s (x8 over 42s)  kubelet          Node pause-533709 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s (x8 over 42s)  kubelet          Node pause-533709 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s (x7 over 42s)  kubelet          Node pause-533709 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  42s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15s                node-controller  Node pause-533709 event: Registered Node pause-533709 in Controller
	
	
	==> dmesg <==
	[Nov 1 10:08] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000029] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000056] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.008164] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.177735] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.093715] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.113484] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.095666] kauditd_printk_skb: 18 callbacks suppressed
	[Nov 1 10:09] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.052905] kauditd_printk_skb: 18 callbacks suppressed
	[ +11.260167] kauditd_printk_skb: 213 callbacks suppressed
	[ +23.031092] kauditd_printk_skb: 38 callbacks suppressed
	[Nov 1 10:10] kauditd_printk_skb: 326 callbacks suppressed
	[ +19.530644] kauditd_printk_skb: 17 callbacks suppressed
	[  +4.537286] kauditd_printk_skb: 81 callbacks suppressed
	
	
	==> etcd [98ff14e18b0891e76d4fd2fadbbac3a6e50f2d02759c03d0fb851ab167f8fbf3] <==
	{"level":"warn","ts":"2025-11-01T10:10:25.799467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:25.824788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:25.854264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:25.881756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:25.905416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:25.925762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:25.946417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:25.978758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:25.993039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:26.037507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:26.049044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:26.067014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:26.084904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:26.100583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:26.127807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:26.148187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:26.163042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:26.181392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:26.199335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:26.218031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:26.231780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:26.269912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:26.287133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:26.313253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:26.490697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35984","server-name":"","error":"EOF"}
	
	
	==> etcd [b993b8fbb2d6a2d30b60ce04571b393da5a12345208c74d4d9c42e72514262a7] <==
	{"level":"info","ts":"2025-11-01T10:10:00.119705Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"8cd2c58e2f5d4822","local-member-id":"4bca7de7de23e3d4","recovered-remote-peer-id":"4bca7de7de23e3d4","recovered-remote-peer-urls":["https://192.168.61.122:2380"],"recovered-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-11-01T10:10:00.119718Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	{"level":"info","ts":"2025-11-01T10:10:00.119727Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	{"level":"info","ts":"2025-11-01T10:10:00.119758Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	{"level":"info","ts":"2025-11-01T10:10:00.119820Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"4bca7de7de23e3d4 switched to configuration voters=()"}
	{"level":"info","ts":"2025-11-01T10:10:00.119866Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"4bca7de7de23e3d4 became follower at term 2"}
	{"level":"info","ts":"2025-11-01T10:10:00.119874Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft 4bca7de7de23e3d4 [peers: [], term: 2, commit: 426, applied: 0, lastindex: 426, lastterm: 2]"}
	{"level":"warn","ts":"2025-11-01T10:10:00.123703Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2025-11-01T10:10:00.130248Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":407}
	{"level":"info","ts":"2025-11-01T10:10:00.137861Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2025-11-01T10:10:00.138732Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"4bca7de7de23e3d4","timeout":"7s"}
	{"level":"info","ts":"2025-11-01T10:10:00.139248Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"4bca7de7de23e3d4"}
	{"level":"info","ts":"2025-11-01T10:10:00.139335Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"4bca7de7de23e3d4","local-server-version":"3.6.4","cluster-id":"8cd2c58e2f5d4822","cluster-version":"3.6"}
	{"level":"info","ts":"2025-11-01T10:10:00.139767Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"4bca7de7de23e3d4 switched to configuration voters=(5461315932957959124)"}
	{"level":"info","ts":"2025-11-01T10:10:00.139880Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-01T10:10:00.139901Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"8cd2c58e2f5d4822","local-member-id":"4bca7de7de23e3d4","added-peer-id":"4bca7de7de23e3d4","added-peer-peer-urls":["https://192.168.61.122:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-11-01T10:10:00.140121Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"8cd2c58e2f5d4822","local-member-id":"4bca7de7de23e3d4","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-11-01T10:10:00.140583Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"4bca7de7de23e3d4","initial-advertise-peer-urls":["https://192.168.61.122:2380"],"listen-peer-urls":["https://192.168.61.122:2380"],"advertise-client-urls":["https://192.168.61.122:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.122:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-01T10:10:00.140670Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-01T10:10:00.140795Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"4bca7de7de23e3d4","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-11-01T10:10:00.140978Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T10:10:00.141017Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T10:10:00.141032Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T10:10:00.143472Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.61.122:2380"}
	{"level":"info","ts":"2025-11-01T10:10:00.143993Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.61.122:2380"}
	
	
	==> kernel <==
	 10:10:45 up 2 min,  0 users,  load average: 1.28, 0.54, 0.20
	Linux pause-533709 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [1c7105be5e4c21dc0972f008c4fd1f88839a3bcfa0f60e4c6cf4063c49a283ef] <==
	W1101 10:10:01.743515       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 10:10:01.744193       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1101 10:10:01.745587       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1101 10:10:01.770932       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1101 10:10:01.754891       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 10:10:01.781198       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1101 10:10:01.781797       1 instance.go:239] Using reconciler: lease
	W1101 10:10:01.783746       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:10:01.784535       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:10:02.743904       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:10:02.744878       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:10:02.785715       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:10:04.089978       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:10:04.228884       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:10:04.385871       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:10:06.780167       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:10:06.942545       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:10:07.126648       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:10:10.506462       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:10:10.793716       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:10:10.969190       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:10:16.488761       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:10:16.584396       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:10:18.570186       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F1101 10:10:21.784380       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [b8ffdea27f223ff335f1028f2c3f5349fd3a05ea5e4ca994148b67c06ef30019] <==
	I1101 10:10:27.267443       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 10:10:27.268285       1 aggregator.go:171] initial CRD sync complete...
	I1101 10:10:27.268315       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 10:10:27.268321       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:10:27.268327       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:10:27.269386       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 10:10:27.269479       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:10:27.314151       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 10:10:27.314249       1 policy_source.go:240] refreshing policies
	I1101 10:10:27.329001       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:10:27.329996       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 10:10:27.330148       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 10:10:27.334261       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 10:10:27.335963       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 10:10:27.337353       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:10:27.345140       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1101 10:10:27.353413       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:10:28.136859       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:10:28.140416       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:10:29.113280       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:10:29.172340       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 10:10:29.220792       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:10:29.232530       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:10:30.854541       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:10:30.905167       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [877956ec3f06ed232e4f3b24002a100db3b52c5d04bbdac7f73bc031d79d7458] <==
	I1101 10:09:10.321122       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:09:10.321489       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 10:09:10.321503       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:09:10.322388       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:09:10.324228       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 10:09:10.324938       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 10:09:10.325053       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:09:10.326679       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 10:09:10.326799       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 10:09:10.326923       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 10:09:10.327014       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 10:09:10.327022       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 10:09:10.328003       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 10:09:10.332468       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:09:10.340672       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:09:10.358741       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-533709" podCIDRs=["10.244.0.0/24"]
	I1101 10:09:10.362498       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:09:10.362670       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:09:10.362706       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:09:10.368706       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:09:10.369959       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 10:09:10.374896       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 10:09:10.381918       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:09:10.388161       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:09:10.388284       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [d769cb43b90bbe32810e36eb46267c2143eb8836ca85a96afb4bf2f7172db304] <==
	I1101 10:10:30.609541       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 10:10:30.609754       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:10:30.613171       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 10:10:30.614043       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 10:10:30.615382       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 10:10:30.622129       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 10:10:30.630816       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 10:10:30.633336       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 10:10:30.633381       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:10:30.636544       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 10:10:30.642837       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 10:10:30.646271       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 10:10:30.648779       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 10:10:30.650915       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:10:30.651521       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:10:30.651552       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:10:30.652484       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:10:30.652495       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:10:30.652500       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:10:30.652568       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 10:10:30.654158       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:10:30.659877       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:10:30.661893       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 10:10:30.667566       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:10:30.670147       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	
	
	==> kube-proxy [429c6ef4a6c572cb4492e1a9dda379db7efab14e62f9fc850f89c70fc81bb4ba] <==
	E1101 10:10:22.795457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-533709&limit=500&resourceVersion=0\": dial tcp 192.168.61.122:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.61.122:53430->192.168.61.122:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1101 10:10:27.342226       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:10:27.342326       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.61.122"]
	E1101 10:10:27.342470       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:10:27.401356       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 10:10:27.401428       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 10:10:27.401459       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:10:27.415998       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:10:27.418894       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:10:27.419490       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:10:27.436277       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:10:27.436498       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:10:27.436664       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:10:27.436689       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:10:27.440106       1 config.go:200] "Starting service config controller"
	I1101 10:10:27.440777       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:10:27.440662       1 config.go:309] "Starting node config controller"
	I1101 10:10:27.441009       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:10:27.441036       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:10:27.536824       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:10:27.536890       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 10:10:27.541223       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [8bbaa009a8d7c572ab9c6f67864a5b74d4937c9c0fdfb81ff3db36bd7b78f19e] <==
	I1101 10:09:13.085888       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:09:13.187960       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:09:13.188011       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.61.122"]
	E1101 10:09:13.188167       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:09:13.397807       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 10:09:13.397959       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 10:09:13.398036       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:09:13.415213       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:09:13.416168       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:09:13.416220       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:09:13.423300       1 config.go:200] "Starting service config controller"
	I1101 10:09:13.423455       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:09:13.423560       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:09:13.423579       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:09:13.423677       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:09:13.423695       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:09:13.425175       1 config.go:309] "Starting node config controller"
	I1101 10:09:13.425333       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:09:13.425364       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:09:13.524490       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:09:13.524528       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:09:13.524591       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e204777a5b47ad5602b9943aa82ef3b3c9cbc9ffab40a8c53b196972ab1f8096] <==
	I1101 10:10:25.066799       1 serving.go:386] Generated self-signed cert in-memory
	I1101 10:10:27.301268       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:10:27.301571       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:10:27.308661       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:10:27.308823       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 10:10:27.308866       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 10:10:27.308901       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:10:27.310149       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:10:27.310179       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:10:27.310193       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:10:27.310198       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:10:27.410305       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:10:27.410719       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 10:10:27.410736       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [e362762826b71a934dbb5eea442d975cc05597b31ae86c9e7948f1898ab565fc] <==
	E1101 10:09:03.352345       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:09:03.352382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:09:03.352416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:09:03.352448       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:09:03.352482       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:09:03.354729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:09:03.354783       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:09:04.199027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:09:04.334482       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:09:04.564213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:09:04.587016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:09:04.626562       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:09:04.632916       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:09:04.704204       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1101 10:09:04.748893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 10:09:04.754046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:09:04.754642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:09:04.778263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1101 10:09:06.529846       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:09:50.056501       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1101 10:09:50.056571       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1101 10:09:50.056614       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1101 10:09:50.056656       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:09:50.056838       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1101 10:09:50.056885       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 01 10:10:25 pause-533709 kubelet[3593]: E1101 10:10:25.460843    3593 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-533709\" not found" node="pause-533709"
	Nov 01 10:10:25 pause-533709 kubelet[3593]: E1101 10:10:25.461297    3593 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-533709\" not found" node="pause-533709"
	Nov 01 10:10:25 pause-533709 kubelet[3593]: E1101 10:10:25.461700    3593 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-533709\" not found" node="pause-533709"
	Nov 01 10:10:26 pause-533709 kubelet[3593]: E1101 10:10:26.467548    3593 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-533709\" not found" node="pause-533709"
	Nov 01 10:10:26 pause-533709 kubelet[3593]: E1101 10:10:26.467853    3593 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-533709\" not found" node="pause-533709"
	Nov 01 10:10:27 pause-533709 kubelet[3593]: I1101 10:10:27.183162    3593 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-533709"
	Nov 01 10:10:27 pause-533709 kubelet[3593]: E1101 10:10:27.411904    3593 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-533709\" already exists" pod="kube-system/etcd-pause-533709"
	Nov 01 10:10:27 pause-533709 kubelet[3593]: I1101 10:10:27.412194    3593 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-533709"
	Nov 01 10:10:27 pause-533709 kubelet[3593]: I1101 10:10:27.415638    3593 kubelet_node_status.go:124] "Node was previously registered" node="pause-533709"
	Nov 01 10:10:27 pause-533709 kubelet[3593]: I1101 10:10:27.415781    3593 kubelet_node_status.go:78] "Successfully registered node" node="pause-533709"
	Nov 01 10:10:27 pause-533709 kubelet[3593]: I1101 10:10:27.415984    3593 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 10:10:27 pause-533709 kubelet[3593]: I1101 10:10:27.417840    3593 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 10:10:27 pause-533709 kubelet[3593]: E1101 10:10:27.433187    3593 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-533709\" already exists" pod="kube-system/kube-apiserver-pause-533709"
	Nov 01 10:10:27 pause-533709 kubelet[3593]: I1101 10:10:27.433784    3593 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-533709"
	Nov 01 10:10:27 pause-533709 kubelet[3593]: E1101 10:10:27.450622    3593 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-533709\" already exists" pod="kube-system/kube-controller-manager-pause-533709"
	Nov 01 10:10:27 pause-533709 kubelet[3593]: I1101 10:10:27.450829    3593 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-533709"
	Nov 01 10:10:27 pause-533709 kubelet[3593]: E1101 10:10:27.460577    3593 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-533709\" already exists" pod="kube-system/kube-scheduler-pause-533709"
	Nov 01 10:10:28 pause-533709 kubelet[3593]: I1101 10:10:28.071466    3593 apiserver.go:52] "Watching apiserver"
	Nov 01 10:10:28 pause-533709 kubelet[3593]: I1101 10:10:28.082360    3593 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 01 10:10:28 pause-533709 kubelet[3593]: I1101 10:10:28.128949    3593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c0c82af-9116-41ce-9b01-bb2802550969-lib-modules\") pod \"kube-proxy-mkdfj\" (UID: \"1c0c82af-9116-41ce-9b01-bb2802550969\") " pod="kube-system/kube-proxy-mkdfj"
	Nov 01 10:10:28 pause-533709 kubelet[3593]: I1101 10:10:28.130581    3593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c0c82af-9116-41ce-9b01-bb2802550969-xtables-lock\") pod \"kube-proxy-mkdfj\" (UID: \"1c0c82af-9116-41ce-9b01-bb2802550969\") " pod="kube-system/kube-proxy-mkdfj"
	Nov 01 10:10:33 pause-533709 kubelet[3593]: E1101 10:10:33.303033    3593 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761991833302600944  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 01 10:10:33 pause-533709 kubelet[3593]: E1101 10:10:33.303153    3593 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761991833302600944  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 01 10:10:43 pause-533709 kubelet[3593]: E1101 10:10:43.304776    3593 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761991843304523438  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 01 10:10:43 pause-533709 kubelet[3593]: E1101 10:10:43.304797    3593 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761991843304523438  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-533709 -n pause-533709
helpers_test.go:269: (dbg) Run:  kubectl --context pause-533709 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
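The post-mortem checks recorded above can be replayed by hand; a minimal sketch using only commands that already appear in this report (profile name pause-533709 taken from the log, binary path relative to the test workspace):

	# API server status for the profile under test
	out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-533709 -n pause-533709
	# list pods (all namespaces) that are not in the Running phase
	kubectl --context pause-533709 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	# dump the last 25 lines of minikube logs for the profile
	out/minikube-linux-amd64 -p pause-533709 logs -n 25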
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-533709 -n pause-533709
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-533709 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-533709 logs -n 25: (2.807445326s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────────────
──┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────────────
──┤
	│ ssh     │ -p cilium-242892 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                            │ cilium-242892             │ jenkins │ v1.37.0 │ 01 Nov 25 10:07 UTC │                     │
	│ ssh     │ -p cilium-242892 sudo cat /etc/containerd/config.toml                                                                                                                                                                                       │ cilium-242892             │ jenkins │ v1.37.0 │ 01 Nov 25 10:07 UTC │                     │
	│ ssh     │ -p cilium-242892 sudo containerd config dump                                                                                                                                                                                                │ cilium-242892             │ jenkins │ v1.37.0 │ 01 Nov 25 10:07 UTC │                     │
	│ ssh     │ -p cilium-242892 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                         │ cilium-242892             │ jenkins │ v1.37.0 │ 01 Nov 25 10:07 UTC │                     │
	│ ssh     │ -p cilium-242892 sudo systemctl cat crio --no-pager                                                                                                                                                                                         │ cilium-242892             │ jenkins │ v1.37.0 │ 01 Nov 25 10:07 UTC │                     │
	│ ssh     │ -p cilium-242892 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                               │ cilium-242892             │ jenkins │ v1.37.0 │ 01 Nov 25 10:07 UTC │                     │
	│ ssh     │ -p cilium-242892 sudo crio config                                                                                                                                                                                                           │ cilium-242892             │ jenkins │ v1.37.0 │ 01 Nov 25 10:07 UTC │                     │
	│ delete  │ -p cilium-242892                                                                                                                                                                                                                            │ cilium-242892             │ jenkins │ v1.37.0 │ 01 Nov 25 10:07 UTC │ 01 Nov 25 10:07 UTC │
	│ start   │ -p guest-930796 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                                                                                                     │ guest-930796              │ jenkins │ v1.37.0 │ 01 Nov 25 10:07 UTC │ 01 Nov 25 10:08 UTC │
	│ ssh     │ -p NoKubernetes-336039 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                     │ NoKubernetes-336039       │ jenkins │ v1.37.0 │ 01 Nov 25 10:08 UTC │                     │
	│ delete  │ -p force-systemd-env-940638                                                                                                                                                                                                                 │ force-systemd-env-940638  │ jenkins │ v1.37.0 │ 01 Nov 25 10:08 UTC │ 01 Nov 25 10:08 UTC │
	│ delete  │ -p NoKubernetes-336039                                                                                                                                                                                                                      │ NoKubernetes-336039       │ jenkins │ v1.37.0 │ 01 Nov 25 10:08 UTC │ 01 Nov 25 10:08 UTC │
	│ start   │ -p pause-533709 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                                                                                                     │ pause-533709              │ jenkins │ v1.37.0 │ 01 Nov 25 10:08 UTC │ 01 Nov 25 10:09 UTC │
	│ start   │ -p cert-expiration-734989 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                                                                                                                                        │ cert-expiration-734989    │ jenkins │ v1.37.0 │ 01 Nov 25 10:08 UTC │ 01 Nov 25 10:09 UTC │
	│ start   │ -p force-systemd-flag-360782 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                                                   │ force-systemd-flag-360782 │ jenkins │ v1.37.0 │ 01 Nov 25 10:08 UTC │ 01 Nov 25 10:09 UTC │
	│ delete  │ -p kubernetes-upgrade-353156                                                                                                                                                                                                                │ kubernetes-upgrade-353156 │ jenkins │ v1.37.0 │ 01 Nov 25 10:08 UTC │ 01 Nov 25 10:08 UTC │
	│ start   │ -p cert-options-476227 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio                     │ cert-options-476227       │ jenkins │ v1.37.0 │ 01 Nov 25 10:08 UTC │ 01 Nov 25 10:10 UTC │
	│ start   │ -p pause-533709 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                                              │ pause-533709              │ jenkins │ v1.37.0 │ 01 Nov 25 10:09 UTC │ 01 Nov 25 10:10 UTC │
	│ ssh     │ force-systemd-flag-360782 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                        │ force-systemd-flag-360782 │ jenkins │ v1.37.0 │ 01 Nov 25 10:09 UTC │ 01 Nov 25 10:09 UTC │
	│ delete  │ -p force-systemd-flag-360782                                                                                                                                                                                                                │ force-systemd-flag-360782 │ jenkins │ v1.37.0 │ 01 Nov 25 10:09 UTC │ 01 Nov 25 10:09 UTC │
	│ start   │ -p old-k8s-version-080837 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-080837    │ jenkins │ v1.37.0 │ 01 Nov 25 10:09 UTC │                     │
	│ ssh     │ cert-options-476227 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                 │ cert-options-476227       │ jenkins │ v1.37.0 │ 01 Nov 25 10:10 UTC │ 01 Nov 25 10:10 UTC │
	│ ssh     │ -p cert-options-476227 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                               │ cert-options-476227       │ jenkins │ v1.37.0 │ 01 Nov 25 10:10 UTC │ 01 Nov 25 10:10 UTC │
	│ delete  │ -p cert-options-476227                                                                                                                                                                                                                      │ cert-options-476227       │ jenkins │ v1.37.0 │ 01 Nov 25 10:10 UTC │ 01 Nov 25 10:10 UTC │
	│ start   │ -p embed-certs-468183 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-468183        │ jenkins │ v1.37.0 │ 01 Nov 25 10:10 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────────────
──┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:10:11
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:10:11.364626  573367 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:10:11.365063  573367 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:10:11.365081  573367 out.go:374] Setting ErrFile to fd 2...
	I1101 10:10:11.365088  573367 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:10:11.365448  573367 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-530629/.minikube/bin
	I1101 10:10:11.366192  573367 out.go:368] Setting JSON to false
	I1101 10:10:11.367576  573367 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":67933,"bootTime":1761923878,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:10:11.367715  573367 start.go:143] virtualization: kvm guest
	I1101 10:10:11.369993  573367 out.go:179] * [embed-certs-468183] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:10:11.371301  573367 notify.go:221] Checking for updates...
	I1101 10:10:11.371309  573367 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 10:10:11.372738  573367 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:10:11.374284  573367 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-530629/kubeconfig
	I1101 10:10:11.375630  573367 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-530629/.minikube
	I1101 10:10:11.377032  573367 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:10:11.378315  573367 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:10:11.380283  573367 config.go:182] Loaded profile config "cert-expiration-734989": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:10:11.380438  573367 config.go:182] Loaded profile config "guest-930796": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1101 10:10:11.380579  573367 config.go:182] Loaded profile config "old-k8s-version-080837": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 10:10:11.380762  573367 config.go:182] Loaded profile config "pause-533709": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:10:11.380920  573367 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:10:11.420791  573367 out.go:179] * Using the kvm2 driver based on user configuration
	I1101 10:10:11.421944  573367 start.go:309] selected driver: kvm2
	I1101 10:10:11.421961  573367 start.go:930] validating driver "kvm2" against <nil>
	I1101 10:10:11.421977  573367 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:10:11.422790  573367 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 10:10:11.423119  573367 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:10:11.423167  573367 cni.go:84] Creating CNI manager for ""
	I1101 10:10:11.423226  573367 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 10:10:11.423237  573367 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1101 10:10:11.423312  573367 start.go:353] cluster config:
	{Name:embed-certs-468183 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-468183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:10:11.423431  573367 iso.go:125] acquiring lock: {Name:mk4a0ae0d13e232f8e381ad8e5059e42b27a0733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:10:11.425029  573367 out.go:179] * Starting "embed-certs-468183" primary control-plane node in "embed-certs-468183" cluster
	I1101 10:10:09.764395  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:09.765161  573081 main.go:143] libmachine: no network interface addresses found for domain old-k8s-version-080837 (source=lease)
	I1101 10:10:09.765197  573081 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:10:09.765552  573081 main.go:143] libmachine: unable to find current IP address of domain old-k8s-version-080837 in network mk-old-k8s-version-080837 (interfaces detected: [])
	I1101 10:10:09.765595  573081 retry.go:31] will retry after 2.291513508s: waiting for domain to come up
	I1101 10:10:12.060133  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:12.060696  573081 main.go:143] libmachine: no network interface addresses found for domain old-k8s-version-080837 (source=lease)
	I1101 10:10:12.060713  573081 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:10:12.061152  573081 main.go:143] libmachine: unable to find current IP address of domain old-k8s-version-080837 in network mk-old-k8s-version-080837 (interfaces detected: [])
	I1101 10:10:12.061193  573081 retry.go:31] will retry after 4.280629345s: waiting for domain to come up
	I1101 10:10:13.268027  572974 api_server.go:269] stopped: https://192.168.61.122:8443/healthz: Get "https://192.168.61.122:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 10:10:13.268103  572974 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1101 10:10:11.426098  573367 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:10:11.426139  573367 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-530629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 10:10:11.426156  573367 cache.go:59] Caching tarball of preloaded images
	I1101 10:10:11.426253  573367 preload.go:233] Found /home/jenkins/minikube-integration/21833-530629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 10:10:11.426268  573367 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:10:11.426394  573367 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/embed-certs-468183/config.json ...
	I1101 10:10:11.426423  573367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/embed-certs-468183/config.json: {Name:mk0bcfbbdec7330a8609a2b3f9b6e2b8348c0444 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:10:11.426600  573367 start.go:360] acquireMachinesLock for embed-certs-468183: {Name:mk0f0dee5270210132f861d1e08706cfde31b35b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 10:10:16.345353  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:16.346270  573081 main.go:143] libmachine: domain old-k8s-version-080837 has current primary IP address 192.168.50.181 and MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:16.346297  573081 main.go:143] libmachine: found domain IP: 192.168.50.181
	I1101 10:10:16.346332  573081 main.go:143] libmachine: reserving static IP address...
	I1101 10:10:16.346926  573081 main.go:143] libmachine: unable to find host DHCP lease matching {name: "old-k8s-version-080837", mac: "52:54:00:ba:e1:24", ip: "192.168.50.181"} in network mk-old-k8s-version-080837
	I1101 10:10:16.586855  573081 main.go:143] libmachine: reserved static IP address 192.168.50.181 for domain old-k8s-version-080837
	I1101 10:10:16.586921  573081 main.go:143] libmachine: waiting for SSH...
	I1101 10:10:16.586930  573081 main.go:143] libmachine: Getting to WaitForSSH function...
	I1101 10:10:16.590543  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:16.591226  573081 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:e1:24", ip: ""} in network mk-old-k8s-version-080837: {Iface:virbr2 ExpiryTime:2025-11-01 11:10:14 +0000 UTC Type:0 Mac:52:54:00:ba:e1:24 Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ba:e1:24}
	I1101 10:10:16.591290  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined IP address 192.168.50.181 and MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:16.591515  573081 main.go:143] libmachine: Using SSH client type: native
	I1101 10:10:16.591759  573081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.50.181 22 <nil> <nil>}
	I1101 10:10:16.591774  573081 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1101 10:10:16.706389  573081 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:10:16.706793  573081 main.go:143] libmachine: domain creation complete
	I1101 10:10:16.708421  573081 machine.go:94] provisionDockerMachine start ...
	I1101 10:10:16.710805  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:16.711295  573081 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:e1:24", ip: ""} in network mk-old-k8s-version-080837: {Iface:virbr2 ExpiryTime:2025-11-01 11:10:14 +0000 UTC Type:0 Mac:52:54:00:ba:e1:24 Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:old-k8s-version-080837 Clientid:01:52:54:00:ba:e1:24}
	I1101 10:10:16.711329  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined IP address 192.168.50.181 and MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:16.711504  573081 main.go:143] libmachine: Using SSH client type: native
	I1101 10:10:16.711702  573081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.50.181 22 <nil> <nil>}
	I1101 10:10:16.711712  573081 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:10:16.832430  573081 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1101 10:10:16.832465  573081 buildroot.go:166] provisioning hostname "old-k8s-version-080837"
	I1101 10:10:16.836364  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:16.836845  573081 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:e1:24", ip: ""} in network mk-old-k8s-version-080837: {Iface:virbr2 ExpiryTime:2025-11-01 11:10:14 +0000 UTC Type:0 Mac:52:54:00:ba:e1:24 Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:old-k8s-version-080837 Clientid:01:52:54:00:ba:e1:24}
	I1101 10:10:16.836880  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined IP address 192.168.50.181 and MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:16.837117  573081 main.go:143] libmachine: Using SSH client type: native
	I1101 10:10:16.837359  573081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.50.181 22 <nil> <nil>}
	I1101 10:10:16.837376  573081 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-080837 && echo "old-k8s-version-080837" | sudo tee /etc/hostname
	I1101 10:10:16.963511  573081 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-080837
	
	I1101 10:10:16.966816  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:16.967248  573081 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:e1:24", ip: ""} in network mk-old-k8s-version-080837: {Iface:virbr2 ExpiryTime:2025-11-01 11:10:14 +0000 UTC Type:0 Mac:52:54:00:ba:e1:24 Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:old-k8s-version-080837 Clientid:01:52:54:00:ba:e1:24}
	I1101 10:10:16.967280  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined IP address 192.168.50.181 and MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:16.967441  573081 main.go:143] libmachine: Using SSH client type: native
	I1101 10:10:16.967636  573081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.50.181 22 <nil> <nil>}
	I1101 10:10:16.967653  573081 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-080837' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-080837/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-080837' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:10:17.086146  573081 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:10:17.086182  573081 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21833-530629/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-530629/.minikube}
	I1101 10:10:17.086209  573081 buildroot.go:174] setting up certificates
	I1101 10:10:17.086224  573081 provision.go:84] configureAuth start
	I1101 10:10:17.089520  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.089953  573081 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:e1:24", ip: ""} in network mk-old-k8s-version-080837: {Iface:virbr2 ExpiryTime:2025-11-01 11:10:14 +0000 UTC Type:0 Mac:52:54:00:ba:e1:24 Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:old-k8s-version-080837 Clientid:01:52:54:00:ba:e1:24}
	I1101 10:10:17.089976  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined IP address 192.168.50.181 and MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.092254  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.092685  573081 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:e1:24", ip: ""} in network mk-old-k8s-version-080837: {Iface:virbr2 ExpiryTime:2025-11-01 11:10:14 +0000 UTC Type:0 Mac:52:54:00:ba:e1:24 Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:old-k8s-version-080837 Clientid:01:52:54:00:ba:e1:24}
	I1101 10:10:17.092707  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined IP address 192.168.50.181 and MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.092845  573081 provision.go:143] copyHostCerts
	I1101 10:10:17.092909  573081 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-530629/.minikube/ca.pem, removing ...
	I1101 10:10:17.092928  573081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-530629/.minikube/ca.pem
	I1101 10:10:17.093008  573081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-530629/.minikube/ca.pem (1078 bytes)
	I1101 10:10:17.093127  573081 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-530629/.minikube/cert.pem, removing ...
	I1101 10:10:17.093137  573081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-530629/.minikube/cert.pem
	I1101 10:10:17.093175  573081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-530629/.minikube/cert.pem (1123 bytes)
	I1101 10:10:17.093294  573081 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-530629/.minikube/key.pem, removing ...
	I1101 10:10:17.093309  573081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-530629/.minikube/key.pem
	I1101 10:10:17.093345  573081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-530629/.minikube/key.pem (1675 bytes)
	I1101 10:10:17.093458  573081 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-530629/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-080837 san=[127.0.0.1 192.168.50.181 localhost minikube old-k8s-version-080837]
	I1101 10:10:17.269296  573081 provision.go:177] copyRemoteCerts
	I1101 10:10:17.269362  573081 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:10:17.272052  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.272467  573081 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:e1:24", ip: ""} in network mk-old-k8s-version-080837: {Iface:virbr2 ExpiryTime:2025-11-01 11:10:14 +0000 UTC Type:0 Mac:52:54:00:ba:e1:24 Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:old-k8s-version-080837 Clientid:01:52:54:00:ba:e1:24}
	I1101 10:10:17.272501  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined IP address 192.168.50.181 and MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.272719  573081 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/old-k8s-version-080837/id_rsa Username:docker}
	I1101 10:10:17.361036  573081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:10:17.396585  573081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1101 10:10:17.429454  573081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:10:17.459991  573081 provision.go:87] duration metric: took 373.747673ms to configureAuth
	I1101 10:10:17.460035  573081 buildroot.go:189] setting minikube options for container-runtime
	I1101 10:10:17.460233  573081 config.go:182] Loaded profile config "old-k8s-version-080837": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 10:10:17.463173  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.463626  573081 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:e1:24", ip: ""} in network mk-old-k8s-version-080837: {Iface:virbr2 ExpiryTime:2025-11-01 11:10:14 +0000 UTC Type:0 Mac:52:54:00:ba:e1:24 Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:old-k8s-version-080837 Clientid:01:52:54:00:ba:e1:24}
	I1101 10:10:17.463661  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined IP address 192.168.50.181 and MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.463835  573081 main.go:143] libmachine: Using SSH client type: native
	I1101 10:10:17.464056  573081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.50.181 22 <nil> <nil>}
	I1101 10:10:17.464070  573081 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:10:17.986061  573367 start.go:364] duration metric: took 6.559419933s to acquireMachinesLock for "embed-certs-468183"
	I1101 10:10:17.986142  573367 start.go:93] Provisioning new machine with config: &{Name:embed-certs-468183 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-468183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:10:17.986284  573367 start.go:125] createHost starting for "" (driver="kvm2")
	I1101 10:10:18.272137  572974 api_server.go:269] stopped: https://192.168.61.122:8443/healthz: Get "https://192.168.61.122:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 10:10:18.272182  572974 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1101 10:10:17.725827  573081 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:10:17.725862  573081 machine.go:97] duration metric: took 1.017420655s to provisionDockerMachine
	I1101 10:10:17.725876  573081 client.go:176] duration metric: took 21.394411039s to LocalClient.Create
	I1101 10:10:17.725921  573081 start.go:167] duration metric: took 21.394505597s to libmachine.API.Create "old-k8s-version-080837"
	I1101 10:10:17.725934  573081 start.go:293] postStartSetup for "old-k8s-version-080837" (driver="kvm2")
	I1101 10:10:17.725949  573081 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:10:17.726033  573081 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:10:17.729293  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.729794  573081 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:e1:24", ip: ""} in network mk-old-k8s-version-080837: {Iface:virbr2 ExpiryTime:2025-11-01 11:10:14 +0000 UTC Type:0 Mac:52:54:00:ba:e1:24 Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:old-k8s-version-080837 Clientid:01:52:54:00:ba:e1:24}
	I1101 10:10:17.729825  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined IP address 192.168.50.181 and MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.730074  573081 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/old-k8s-version-080837/id_rsa Username:docker}
	I1101 10:10:17.816173  573081 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:10:17.821814  573081 info.go:137] Remote host: Buildroot 2025.02
	I1101 10:10:17.821846  573081 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-530629/.minikube/addons for local assets ...
	I1101 10:10:17.821973  573081 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-530629/.minikube/files for local assets ...
	I1101 10:10:17.822224  573081 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-530629/.minikube/files/etc/ssl/certs/5345152.pem -> 5345152.pem in /etc/ssl/certs
	I1101 10:10:17.822367  573081 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:10:17.835768  573081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/files/etc/ssl/certs/5345152.pem --> /etc/ssl/certs/5345152.pem (1708 bytes)
	I1101 10:10:17.871800  573081 start.go:296] duration metric: took 145.83832ms for postStartSetup
	I1101 10:10:17.875584  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.876112  573081 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:e1:24", ip: ""} in network mk-old-k8s-version-080837: {Iface:virbr2 ExpiryTime:2025-11-01 11:10:14 +0000 UTC Type:0 Mac:52:54:00:ba:e1:24 Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:old-k8s-version-080837 Clientid:01:52:54:00:ba:e1:24}
	I1101 10:10:17.876138  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined IP address 192.168.50.181 and MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.876364  573081 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/config.json ...
	I1101 10:10:17.876549  573081 start.go:128] duration metric: took 21.616405598s to createHost
	I1101 10:10:17.879051  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.879398  573081 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:e1:24", ip: ""} in network mk-old-k8s-version-080837: {Iface:virbr2 ExpiryTime:2025-11-01 11:10:14 +0000 UTC Type:0 Mac:52:54:00:ba:e1:24 Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:old-k8s-version-080837 Clientid:01:52:54:00:ba:e1:24}
	I1101 10:10:17.879422  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined IP address 192.168.50.181 and MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.879611  573081 main.go:143] libmachine: Using SSH client type: native
	I1101 10:10:17.879845  573081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.50.181 22 <nil> <nil>}
	I1101 10:10:17.879857  573081 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1101 10:10:17.985857  573081 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761991817.946271015
	
	I1101 10:10:17.985885  573081 fix.go:216] guest clock: 1761991817.946271015
	I1101 10:10:17.985917  573081 fix.go:229] Guest: 2025-11-01 10:10:17.946271015 +0000 UTC Remote: 2025-11-01 10:10:17.876561504 +0000 UTC m=+25.257838208 (delta=69.709511ms)
	I1101 10:10:17.985941  573081 fix.go:200] guest clock delta is within tolerance: 69.709511ms
	I1101 10:10:17.985948  573081 start.go:83] releasing machines lock for "old-k8s-version-080837", held for 21.725953833s
	I1101 10:10:17.989673  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.990083  573081 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:e1:24", ip: ""} in network mk-old-k8s-version-080837: {Iface:virbr2 ExpiryTime:2025-11-01 11:10:14 +0000 UTC Type:0 Mac:52:54:00:ba:e1:24 Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:old-k8s-version-080837 Clientid:01:52:54:00:ba:e1:24}
	I1101 10:10:17.990108  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined IP address 192.168.50.181 and MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.990843  573081 ssh_runner.go:195] Run: cat /version.json
	I1101 10:10:17.990946  573081 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:10:17.994413  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.994811  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.994854  573081 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:e1:24", ip: ""} in network mk-old-k8s-version-080837: {Iface:virbr2 ExpiryTime:2025-11-01 11:10:14 +0000 UTC Type:0 Mac:52:54:00:ba:e1:24 Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:old-k8s-version-080837 Clientid:01:52:54:00:ba:e1:24}
	I1101 10:10:17.994879  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined IP address 192.168.50.181 and MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.995073  573081 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/old-k8s-version-080837/id_rsa Username:docker}
	I1101 10:10:17.995431  573081 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:e1:24", ip: ""} in network mk-old-k8s-version-080837: {Iface:virbr2 ExpiryTime:2025-11-01 11:10:14 +0000 UTC Type:0 Mac:52:54:00:ba:e1:24 Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:old-k8s-version-080837 Clientid:01:52:54:00:ba:e1:24}
	I1101 10:10:17.995462  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined IP address 192.168.50.181 and MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:17.995642  573081 sshutil.go:53] new ssh client: &{IP:192.168.50.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/old-k8s-version-080837/id_rsa Username:docker}
	I1101 10:10:18.080624  573081 ssh_runner.go:195] Run: systemctl --version
	I1101 10:10:18.107012  573081 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:10:18.281148  573081 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:10:18.289622  573081 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:10:18.289696  573081 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:10:18.312384  573081 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 10:10:18.312416  573081 start.go:496] detecting cgroup driver to use...
	I1101 10:10:18.312503  573081 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:10:18.338921  573081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:10:18.357960  573081 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:10:18.358028  573081 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:10:18.377290  573081 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:10:18.395398  573081 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:10:18.544803  573081 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:10:18.753999  573081 docker.go:234] disabling docker service ...
	I1101 10:10:18.754082  573081 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:10:18.771271  573081 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:10:18.789543  573081 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:10:18.961955  573081 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:10:19.113131  573081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:10:19.130117  573081 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:10:19.157381  573081 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 10:10:19.157450  573081 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:10:19.171518  573081 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:10:19.171594  573081 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:10:19.185163  573081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:10:19.198466  573081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:10:19.211935  573081 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:10:19.225604  573081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:10:19.238311  573081 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:10:19.263611  573081 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:10:19.277406  573081 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:10:19.290333  573081 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 10:10:19.290412  573081 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 10:10:19.320599  573081 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:10:19.338965  573081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:10:19.499507  573081 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:10:19.634883  573081 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:10:19.634993  573081 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:10:19.643768  573081 start.go:564] Will wait 60s for crictl version
	I1101 10:10:19.643844  573081 ssh_runner.go:195] Run: which crictl
	I1101 10:10:19.650542  573081 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 10:10:19.694886  573081 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1101 10:10:19.695015  573081 ssh_runner.go:195] Run: crio --version
	I1101 10:10:19.727446  573081 ssh_runner.go:195] Run: crio --version
	I1101 10:10:19.762499  573081 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.29.1 ...
	I1101 10:10:17.989074  573367 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1101 10:10:17.989313  573367 start.go:159] libmachine.API.Create for "embed-certs-468183" (driver="kvm2")
	I1101 10:10:17.989355  573367 client.go:173] LocalClient.Create starting
	I1101 10:10:17.989463  573367 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem
	I1101 10:10:17.989508  573367 main.go:143] libmachine: Decoding PEM data...
	I1101 10:10:17.989530  573367 main.go:143] libmachine: Parsing certificate...
	I1101 10:10:17.989629  573367 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem
	I1101 10:10:17.989660  573367 main.go:143] libmachine: Decoding PEM data...
	I1101 10:10:17.989677  573367 main.go:143] libmachine: Parsing certificate...
	I1101 10:10:17.990097  573367 main.go:143] libmachine: creating domain...
	I1101 10:10:17.990108  573367 main.go:143] libmachine: creating network...
	I1101 10:10:17.992092  573367 main.go:143] libmachine: found existing default network
	I1101 10:10:17.992356  573367 main.go:143] libmachine: <network connections='4'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 10:10:17.993735  573367 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:c1:f3:d3} reservation:<nil>}
	I1101 10:10:17.994824  573367 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:f3:08:19} reservation:<nil>}
	I1101 10:10:17.995662  573367 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:bf:46:19} reservation:<nil>}
	I1101 10:10:17.996437  573367 network.go:211] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:ee:d7:d1} reservation:<nil>}
	I1101 10:10:17.997508  573367 network.go:206] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001df4d60}
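
Above, the kvm2 driver scans for a free private /24 for the new machine: each candidate subnet already owned by a host bridge (virbr1-virbr4) is skipped, and the first unused one, 192.168.83.0/24, is selected. A rough stand-alone sketch of that scan, using only the Go standard library and an illustrative candidate list rather than minikube's actual network picker:

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet returns the first candidate CIDR that does not contain any
// address already configured on a local interface.
func firstFreeSubnet(candidates []string) (string, error) {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return "", err
	}
	for _, c := range candidates {
		_, cidr, err := net.ParseCIDR(c)
		if err != nil {
			return "", err
		}
		taken := false
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && cidr.Contains(ipnet.IP) {
				taken = true
				break
			}
		}
		if !taken {
			return c, nil
		}
	}
	return "", fmt.Errorf("no free subnet among %v", candidates)
}

func main() {
	subnet, err := firstFreeSubnet([]string{
		"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24",
		"192.168.72.0/24", "192.168.83.0/24",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet", subnet)
}
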
	I1101 10:10:17.997600  573367 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-embed-certs-468183</name>
	  <dns enable='no'/>
	  <ip address='192.168.83.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.83.2' end='192.168.83.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 10:10:18.003466  573367 main.go:143] libmachine: creating private network mk-embed-certs-468183 192.168.83.0/24...
	I1101 10:10:18.086847  573367 main.go:143] libmachine: private network mk-embed-certs-468183 192.168.83.0/24 created
	I1101 10:10:18.087288  573367 main.go:143] libmachine: <network>
	  <name>mk-embed-certs-468183</name>
	  <uuid>2f64b0ed-277f-4c5b-a247-cd3f68bf3b08</uuid>
	  <bridge name='virbr5' stp='on' delay='0'/>
	  <mac address='52:54:00:c7:87:de'/>
	  <dns enable='no'/>
	  <ip address='192.168.83.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.83.2' end='192.168.83.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 10:10:18.087324  573367 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21833-530629/.minikube/machines/embed-certs-468183 ...
	I1101 10:10:18.087346  573367 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21833-530629/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso
	I1101 10:10:18.087358  573367 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21833-530629/.minikube
	I1101 10:10:18.087431  573367 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21833-530629/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21833-530629/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso...
	I1101 10:10:18.353660  573367 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21833-530629/.minikube/machines/embed-certs-468183/id_rsa...
	I1101 10:10:18.795276  573367 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21833-530629/.minikube/machines/embed-certs-468183/embed-certs-468183.rawdisk...
	I1101 10:10:18.795324  573367 main.go:143] libmachine: Writing magic tar header
	I1101 10:10:18.795343  573367 main.go:143] libmachine: Writing SSH key tar header
	I1101 10:10:18.795418  573367 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21833-530629/.minikube/machines/embed-certs-468183 ...
	I1101 10:10:18.795475  573367 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21833-530629/.minikube/machines/embed-certs-468183
	I1101 10:10:18.795498  573367 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21833-530629/.minikube/machines/embed-certs-468183 (perms=drwx------)
	I1101 10:10:18.795508  573367 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21833-530629/.minikube/machines
	I1101 10:10:18.795517  573367 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21833-530629/.minikube/machines (perms=drwxr-xr-x)
	I1101 10:10:18.795527  573367 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21833-530629/.minikube
	I1101 10:10:18.795544  573367 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21833-530629/.minikube (perms=drwxr-xr-x)
	I1101 10:10:18.795557  573367 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21833-530629
	I1101 10:10:18.795568  573367 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21833-530629 (perms=drwxrwxr-x)
	I1101 10:10:18.795581  573367 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1101 10:10:18.795591  573367 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1101 10:10:18.795601  573367 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1101 10:10:18.795611  573367 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1101 10:10:18.795620  573367 main.go:143] libmachine: checking permissions on dir: /home
	I1101 10:10:18.795627  573367 main.go:143] libmachine: skipping /home - not owner
	I1101 10:10:18.795634  573367 main.go:143] libmachine: defining domain...
	I1101 10:10:18.797340  573367 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>embed-certs-468183</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21833-530629/.minikube/machines/embed-certs-468183/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21833-530629/.minikube/machines/embed-certs-468183/embed-certs-468183.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-embed-certs-468183'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
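
The XML above is the domain definition libmachine submits to libvirt: 2 vCPUs, 3 GiB of RAM, the boot2docker ISO as a SCSI cdrom, the raw disk on virtio, one NIC on the private mk-embed-certs-468183 network and one on the default NAT network, a serial console, and a virtio RNG. Libvirt then returns the expanded XML (generated MACs, PCI addresses, machine type) before the domain is started, as the following lines show. A hedged sketch of the define-and-start call, assuming the libvirt.org/go/libvirt bindings; the URI and XML file path are placeholders, not minikube's implementation:

package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

// defineAndStart defines a persistent domain from the XML in path and boots it,
// roughly what "defining domain" followed by "starting domain" does in the log.
func defineAndStart(uri, path string) error {
	xml, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	conn, err := libvirt.NewConnect(uri)
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		return err
	}
	defer dom.Free()

	return dom.Create() // equivalent to `virsh start <name>`
}

func main() {
	if err := defineAndStart("qemu:///system", "embed-certs-468183.xml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
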
	
	I1101 10:10:18.802987  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:ff:1a:65 in network default
	I1101 10:10:18.803645  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:18.803665  573367 main.go:143] libmachine: starting domain...
	I1101 10:10:18.803670  573367 main.go:143] libmachine: ensuring networks are active...
	I1101 10:10:18.804515  573367 main.go:143] libmachine: Ensuring network default is active
	I1101 10:10:18.804971  573367 main.go:143] libmachine: Ensuring network mk-embed-certs-468183 is active
	I1101 10:10:18.805661  573367 main.go:143] libmachine: getting domain XML...
	I1101 10:10:18.806958  573367 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>embed-certs-468183</name>
	  <uuid>6ea47518-574e-4d5a-8064-0f7a13089d7d</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21833-530629/.minikube/machines/embed-certs-468183/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21833-530629/.minikube/machines/embed-certs-468183/embed-certs-468183.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:78:7b:11'/>
	      <source network='mk-embed-certs-468183'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:ff:1a:65'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1101 10:10:20.252853  573367 main.go:143] libmachine: waiting for domain to start...
	I1101 10:10:20.254781  573367 main.go:143] libmachine: domain is now running
	I1101 10:10:20.254804  573367 main.go:143] libmachine: waiting for IP...
	I1101 10:10:20.255969  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:20.256680  573367 main.go:143] libmachine: no network interface addresses found for domain embed-certs-468183 (source=lease)
	I1101 10:10:20.256698  573367 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:10:20.257194  573367 main.go:143] libmachine: unable to find current IP address of domain embed-certs-468183 in network mk-embed-certs-468183 (interfaces detected: [])
	I1101 10:10:20.257263  573367 retry.go:31] will retry after 206.751059ms: waiting for domain to come up
	I1101 10:10:20.466253  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:20.467196  573367 main.go:143] libmachine: no network interface addresses found for domain embed-certs-468183 (source=lease)
	I1101 10:10:20.467220  573367 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:10:20.467630  573367 main.go:143] libmachine: unable to find current IP address of domain embed-certs-468183 in network mk-embed-certs-468183 (interfaces detected: [])
	I1101 10:10:20.467671  573367 retry.go:31] will retry after 333.946985ms: waiting for domain to come up
	I1101 10:10:20.803276  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:20.804124  573367 main.go:143] libmachine: no network interface addresses found for domain embed-certs-468183 (source=lease)
	I1101 10:10:20.804142  573367 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:10:20.804639  573367 main.go:143] libmachine: unable to find current IP address of domain embed-certs-468183 in network mk-embed-certs-468183 (interfaces detected: [])
	I1101 10:10:20.804686  573367 retry.go:31] will retry after 314.04737ms: waiting for domain to come up
	I1101 10:10:21.120435  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:21.121425  573367 main.go:143] libmachine: no network interface addresses found for domain embed-certs-468183 (source=lease)
	I1101 10:10:21.121446  573367 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:10:21.121972  573367 main.go:143] libmachine: unable to find current IP address of domain embed-certs-468183 in network mk-embed-certs-468183 (interfaces detected: [])
	I1101 10:10:21.122021  573367 retry.go:31] will retry after 547.50417ms: waiting for domain to come up
	I1101 10:10:19.767366  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:19.767857  573081 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:e1:24", ip: ""} in network mk-old-k8s-version-080837: {Iface:virbr2 ExpiryTime:2025-11-01 11:10:14 +0000 UTC Type:0 Mac:52:54:00:ba:e1:24 Iaid: IPaddr:192.168.50.181 Prefix:24 Hostname:old-k8s-version-080837 Clientid:01:52:54:00:ba:e1:24}
	I1101 10:10:19.767886  573081 main.go:143] libmachine: domain old-k8s-version-080837 has defined IP address 192.168.50.181 and MAC address 52:54:00:ba:e1:24 in network mk-old-k8s-version-080837
	I1101 10:10:19.768089  573081 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1101 10:10:19.773448  573081 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
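
The one-liner just above is minikube's idempotent /etc/hosts update: grep -v strips any existing host.minikube.internal entry, echo appends a fresh "IP<TAB>name" line, the result goes to a temp file, and sudo cp writes it back so the shell redirect itself never needs root. The small Go helper below expresses the same edit purely for illustration; it is not the code minikube runs.

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry drops any line ending in "\t<name>" and appends "<ip>\t<name>",
// matching the grep -v / echo pipeline in the log.
func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	in := "127.0.0.1\tlocalhost\n192.168.50.2\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(in, "192.168.50.1", "host.minikube.internal"))
}
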
	I1101 10:10:19.791035  573081 kubeadm.go:884] updating cluster {Name:old-k8s-version-080837 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-080837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.181 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:10:19.791220  573081 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 10:10:19.791293  573081 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:10:19.835612  573081 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.0". assuming images are not preloaded.
	I1101 10:10:19.835705  573081 ssh_runner.go:195] Run: which lz4
	I1101 10:10:19.840655  573081 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 10:10:19.846052  573081 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 10:10:19.846078  573081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457056555 bytes)
	I1101 10:10:21.889979  573081 crio.go:462] duration metric: took 2.04936871s to copy over tarball
	I1101 10:10:21.890092  573081 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 10:10:21.799791  572974 api_server.go:269] stopped: https://192.168.61.122:8443/healthz: Get "https://192.168.61.122:8443/healthz": read tcp 192.168.61.1:49794->192.168.61.122:8443: read: connection reset by peer
	I1101 10:10:21.799861  572974 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1101 10:10:21.800492  572974 api_server.go:269] stopped: https://192.168.61.122:8443/healthz: Get "https://192.168.61.122:8443/healthz": dial tcp 192.168.61.122:8443: connect: connection refused
	I1101 10:10:22.261161  572974 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1101 10:10:22.261999  572974 api_server.go:269] stopped: https://192.168.61.122:8443/healthz: Get "https://192.168.61.122:8443/healthz": dial tcp 192.168.61.122:8443: connect: connection refused
	I1101 10:10:22.760787  572974 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1101 10:10:22.761717  572974 api_server.go:269] stopped: https://192.168.61.122:8443/healthz: Get "https://192.168.61.122:8443/healthz": dial tcp 192.168.61.122:8443: connect: connection refused
	I1101 10:10:23.261151  572974 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1101 10:10:23.261928  572974 api_server.go:269] stopped: https://192.168.61.122:8443/healthz: Get "https://192.168.61.122:8443/healthz": dial tcp 192.168.61.122:8443: connect: connection refused
	I1101 10:10:21.671178  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:21.671975  573367 main.go:143] libmachine: no network interface addresses found for domain embed-certs-468183 (source=lease)
	I1101 10:10:21.672003  573367 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:10:21.672489  573367 main.go:143] libmachine: unable to find current IP address of domain embed-certs-468183 in network mk-embed-certs-468183 (interfaces detected: [])
	I1101 10:10:21.672541  573367 retry.go:31] will retry after 476.816581ms: waiting for domain to come up
	I1101 10:10:22.151496  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:22.152390  573367 main.go:143] libmachine: no network interface addresses found for domain embed-certs-468183 (source=lease)
	I1101 10:10:22.152415  573367 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:10:22.152947  573367 main.go:143] libmachine: unable to find current IP address of domain embed-certs-468183 in network mk-embed-certs-468183 (interfaces detected: [])
	I1101 10:10:22.152999  573367 retry.go:31] will retry after 833.583613ms: waiting for domain to come up
	I1101 10:10:22.990847  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:22.993494  573367 main.go:143] libmachine: no network interface addresses found for domain embed-certs-468183 (source=lease)
	I1101 10:10:22.993526  573367 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:10:22.994107  573367 main.go:143] libmachine: unable to find current IP address of domain embed-certs-468183 in network mk-embed-certs-468183 (interfaces detected: [])
	I1101 10:10:22.994159  573367 retry.go:31] will retry after 1.018529043s: waiting for domain to come up
	I1101 10:10:24.014529  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:24.015417  573367 main.go:143] libmachine: no network interface addresses found for domain embed-certs-468183 (source=lease)
	I1101 10:10:24.015441  573367 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:10:24.015925  573367 main.go:143] libmachine: unable to find current IP address of domain embed-certs-468183 in network mk-embed-certs-468183 (interfaces detected: [])
	I1101 10:10:24.015996  573367 retry.go:31] will retry after 1.077192285s: waiting for domain to come up
	I1101 10:10:25.095377  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:25.096265  573367 main.go:143] libmachine: no network interface addresses found for domain embed-certs-468183 (source=lease)
	I1101 10:10:25.096294  573367 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:10:25.096701  573367 main.go:143] libmachine: unable to find current IP address of domain embed-certs-468183 in network mk-embed-certs-468183 (interfaces detected: [])
	I1101 10:10:25.096755  573367 retry.go:31] will retry after 1.602623159s: waiting for domain to come up
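
The retry.go blocks above are the driver waiting for the freshly booted VM to obtain an address: each pass asks libvirt for DHCP leases on the private network, falls back to the ARP table, and, finding nothing, sleeps for a growing interval before trying again. A simplified stand-in for that loop is sketched below; lookupIP is a placeholder for the lease/ARP queries and the backoff constants are illustrative.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var attempts int

// lookupIP stands in for the two-step lookup in the log (DHCP lease, then ARP);
// here it simply fails a few times before returning an address.
func lookupIP() (string, error) {
	attempts++
	if attempts < 4 {
		return "", errors.New("no network interface addresses found")
	}
	return "192.168.83.73", nil
}

// waitForIP retries lookupIP with a growing delay, giving up after the deadline.
func waitForIP(deadline time.Duration) (string, error) {
	start := time.Now()
	wait := 200 * time.Millisecond
	for {
		ip, err := lookupIP()
		if err == nil {
			return ip, nil
		}
		if time.Since(start) > deadline {
			return "", fmt.Errorf("timed out waiting for IP: %w", err)
		}
		sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
		time.Sleep(sleep)
		wait = wait * 3 / 2
	}
}

func main() {
	ip, err := waitForIP(2 * time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Println("domain has IP", ip)
}
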
	I1101 10:10:24.045200  573081 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.155066705s)
	I1101 10:10:24.045247  573081 crio.go:469] duration metric: took 2.155227341s to extract the tarball
	I1101 10:10:24.045264  573081 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 10:10:24.094703  573081 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:10:24.154204  573081 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:10:24.154234  573081 cache_images.go:86] Images are preloaded, skipping loading
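
The preceding 573081 lines are the image preload path: `crictl images` finds no preloaded images, so the versioned preload tarball is copied to the node as /preloaded.tar.lz4, unpacked into /var with tar -I lz4 (preserving xattrs), removed, and the image check is re-run, after which per-image loading is skipped. A minimal shell-out sketch of the unpack step, assuming tar and lz4 exist on the node; this mirrors the logged command rather than minikube's internal helper.

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks the lz4-compressed preload tarball into /var, like the
// "sudo tar --xattrs ... -I lz4 -C /var -xf /preloaded.tar.lz4" line above.
func extractPreload(tarball string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("extracting %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}
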
	I1101 10:10:24.154245  573081 kubeadm.go:935] updating node { 192.168.50.181 8443 v1.28.0 crio true true} ...
	I1101 10:10:24.154362  573081 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-080837 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.181
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-080837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:10:24.154452  573081 ssh_runner.go:195] Run: crio config
	I1101 10:10:24.210069  573081 cni.go:84] Creating CNI manager for ""
	I1101 10:10:24.210103  573081 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 10:10:24.210127  573081 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:10:24.210159  573081 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.181 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-080837 NodeName:old-k8s-version-080837 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.181"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.181 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:10:24.210365  573081 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.181
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-080837"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.181
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.181"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:10:24.210439  573081 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1101 10:10:24.227451  573081 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:10:24.227533  573081 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:10:24.244860  573081 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1101 10:10:24.272097  573081 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:10:24.298197  573081 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
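
At this point the node-specific artifacts have been generated and copied over SSH: the 10-kubeadm.conf drop-in whose ExecStart line carries the kubelet binary version, hostname override and node IP, the kubelet unit file, and the combined kubeadm YAML shown above. The ExecStart line is the part that varies per node; a small text/template sketch of how such a line could be rendered is below, with hypothetical field names rather than minikube's own template data.

package main

import (
	"os"
	"text/template"
)

// kubeletFlags holds the per-node values that appear in the logged ExecStart line.
type kubeletFlags struct {
	Version  string
	NodeName string
	NodeIP   string
}

const execStart = `ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet ` +
	`--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf ` +
	`--config=/var/lib/kubelet/config.yaml ` +
	`--hostname-override={{.NodeName}} ` +
	`--kubeconfig=/etc/kubernetes/kubelet.conf ` +
	`--node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("execstart").Parse(execStart))
	_ = t.Execute(os.Stdout, kubeletFlags{
		Version:  "v1.28.0",
		NodeName: "old-k8s-version-080837",
		NodeIP:   "192.168.50.181",
	})
}
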
	I1101 10:10:24.325280  573081 ssh_runner.go:195] Run: grep 192.168.50.181	control-plane.minikube.internal$ /etc/hosts
	I1101 10:10:24.331324  573081 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.181	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:10:24.353149  573081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:10:24.511240  573081 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:10:24.568507  573081 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837 for IP: 192.168.50.181
	I1101 10:10:24.568555  573081 certs.go:195] generating shared ca certs ...
	I1101 10:10:24.568576  573081 certs.go:227] acquiring lock for ca certs: {Name:mkfa41f6ee02a6d4adbbbd414d6f4b29bf47b076 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:10:24.568785  573081 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-530629/.minikube/ca.key
	I1101 10:10:24.568838  573081 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.key
	I1101 10:10:24.568854  573081 certs.go:257] generating profile certs ...
	I1101 10:10:24.568954  573081 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/client.key
	I1101 10:10:24.568978  573081 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/client.crt with IP's: []
	I1101 10:10:24.765658  573081 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/client.crt ...
	I1101 10:10:24.765707  573081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/client.crt: {Name:mk1a1034120b579b2a4a577dbc9b992e11805d34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:10:24.766022  573081 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/client.key ...
	I1101 10:10:24.766055  573081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/client.key: {Name:mk3b6b73f944fe338084137f8ddbd9af97f63205 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:10:24.766209  573081 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/apiserver.key.aa0f939c
	I1101 10:10:24.766233  573081 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/apiserver.crt.aa0f939c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.181]
	I1101 10:10:25.108541  573081 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/apiserver.crt.aa0f939c ...
	I1101 10:10:25.108580  573081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/apiserver.crt.aa0f939c: {Name:mk9a3367b59928237e364f06f8dda75749781e61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:10:25.108823  573081 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/apiserver.key.aa0f939c ...
	I1101 10:10:25.108850  573081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/apiserver.key.aa0f939c: {Name:mk9a0c74876819df4909d796871b4b43ad1893eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:10:25.109075  573081 certs.go:382] copying /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/apiserver.crt.aa0f939c -> /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/apiserver.crt
	I1101 10:10:25.109203  573081 certs.go:386] copying /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/apiserver.key.aa0f939c -> /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/apiserver.key
	I1101 10:10:25.109282  573081 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/proxy-client.key
	I1101 10:10:25.109302  573081 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/proxy-client.crt with IP's: []
	I1101 10:10:25.467939  573081 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/proxy-client.crt ...
	I1101 10:10:25.467975  573081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/proxy-client.crt: {Name:mkce620e5b8e626fe8a9fd6b7a8833c73ad2f572 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:10:25.468210  573081 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/proxy-client.key ...
	I1101 10:10:25.468235  573081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/proxy-client.key: {Name:mkbb6366934922a4c5c88d752402d6eea326e81b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
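
The certs.go/crypto.go lines above mint the profile certificates against the shared minikube CA: a client certificate, an apiserver serving certificate whose SANs cover the service VIP 10.96.0.1, localhost, 10.0.0.1 and the node IP 192.168.50.181, and an aggregator proxy-client certificate, each written under a file lock. The essence of that step in Go's standard library is building an x509 template with IP SANs and signing it with the CA key; the throwaway self-signed CA below exists only so the sketch runs end to end, and none of this is minikube's actual crypto helper.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// signServingCert issues a certificate for the given IP SANs, signed by the CA,
// much like generating apiserver.crt with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.181].
func signServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}

func main() {
	// Throwaway self-signed CA, standing in for the persistent minikubeCA on disk.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	ips := []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"), net.ParseIP("192.168.50.181")}
	der, _, err := signServingCert(caCert, caKey, ips)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued %d-byte DER certificate\n", len(der))
}
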
	I1101 10:10:25.468452  573081 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/534515.pem (1338 bytes)
	W1101 10:10:25.468492  573081 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-530629/.minikube/certs/534515_empty.pem, impossibly tiny 0 bytes
	I1101 10:10:25.468503  573081 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:10:25.468526  573081 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem (1078 bytes)
	I1101 10:10:25.468548  573081 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:10:25.468569  573081 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/key.pem (1675 bytes)
	I1101 10:10:25.468605  573081 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-530629/.minikube/files/etc/ssl/certs/5345152.pem (1708 bytes)
	I1101 10:10:25.469322  573081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:10:25.531164  573081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 10:10:25.591719  573081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:10:25.635978  573081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:10:25.675078  573081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 10:10:25.714022  573081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 10:10:25.756560  573081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:10:25.797181  573081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1101 10:10:25.836237  573081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/certs/534515.pem --> /usr/share/ca-certificates/534515.pem (1338 bytes)
	I1101 10:10:25.875149  573081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/files/etc/ssl/certs/5345152.pem --> /usr/share/ca-certificates/5345152.pem (1708 bytes)
	I1101 10:10:25.916111  573081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:10:25.952045  573081 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:10:25.980514  573081 ssh_runner.go:195] Run: openssl version
	I1101 10:10:25.989143  573081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/534515.pem && ln -fs /usr/share/ca-certificates/534515.pem /etc/ssl/certs/534515.pem"
	I1101 10:10:26.009169  573081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/534515.pem
	I1101 10:10:26.016861  573081 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:07 /usr/share/ca-certificates/534515.pem
	I1101 10:10:26.016959  573081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/534515.pem
	I1101 10:10:26.027157  573081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/534515.pem /etc/ssl/certs/51391683.0"
	I1101 10:10:26.045219  573081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5345152.pem && ln -fs /usr/share/ca-certificates/5345152.pem /etc/ssl/certs/5345152.pem"
	I1101 10:10:26.063937  573081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5345152.pem
	I1101 10:10:26.070362  573081 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:07 /usr/share/ca-certificates/5345152.pem
	I1101 10:10:26.070452  573081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5345152.pem
	I1101 10:10:26.080787  573081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5345152.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:10:26.096987  573081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:10:26.113377  573081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:10:26.120404  573081 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:45 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:10:26.120495  573081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:10:26.130384  573081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
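
The block above installs the extra CA certificates into the node's system trust store: each PEM is copied to /usr/share/ca-certificates, its OpenSSL subject hash is computed with `openssl x509 -hash -noout`, and a `<hash>.0` symlink is created under /etc/ssl/certs so OpenSSL-based clients can find it. A hedged shell-out version of the hash-and-link pair (requires root to run for real; not minikube's code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of pemPath and symlinks it into
// /etc/ssl/certs as <hash>.0, matching the openssl + ln -fs pair in the log.
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace a stale link, like ln -f would
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
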
	I1101 10:10:26.146534  573081 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:10:26.153837  573081 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 10:10:26.153922  573081 kubeadm.go:401] StartCluster: {Name:old-k8s-version-080837 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-080837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.181 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:10:26.154026  573081 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:10:26.154129  573081 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:10:26.210513  573081 cri.go:89] found id: ""
	I1101 10:10:26.210596  573081 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:10:26.229849  573081 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:10:26.250069  573081 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:10:26.271499  573081 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 10:10:26.271524  573081 kubeadm.go:158] found existing configuration files:
	
	I1101 10:10:26.271581  573081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 10:10:26.289516  573081 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 10:10:26.289609  573081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 10:10:26.312386  573081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 10:10:26.327559  573081 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 10:10:26.327648  573081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:10:26.342307  573081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 10:10:26.354630  573081 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 10:10:26.354710  573081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:10:26.367887  573081 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 10:10:26.382219  573081 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 10:10:26.382302  573081 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 10:10:26.399466  573081 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1101 10:10:26.495705  573081 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1101 10:10:26.495796  573081 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 10:10:26.667752  573081 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 10:10:26.667920  573081 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 10:10:26.668093  573081 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 10:10:26.948089  573081 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 10:10:26.950201  573081 out.go:252]   - Generating certificates and keys ...
	I1101 10:10:26.950378  573081 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 10:10:26.950490  573081 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 10:10:27.367709  573081 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 10:10:23.760891  572974 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1101 10:10:23.761852  572974 api_server.go:269] stopped: https://192.168.61.122:8443/healthz: Get "https://192.168.61.122:8443/healthz": dial tcp 192.168.61.122:8443: connect: connection refused
	I1101 10:10:24.260369  572974 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1101 10:10:27.184654  572974 api_server.go:279] https://192.168.61.122:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 10:10:27.184691  572974 api_server.go:103] status: https://192.168.61.122:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 10:10:27.184711  572974 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1101 10:10:27.233641  572974 api_server.go:279] https://192.168.61.122:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 10:10:27.233678  572974 api_server.go:103] status: https://192.168.61.122:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 10:10:27.260961  572974 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1101 10:10:27.293566  572974 api_server.go:279] https://192.168.61.122:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:10:27.293612  572974 api_server.go:103] status: https://192.168.61.122:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:10:27.761149  572974 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1101 10:10:27.766516  572974 api_server.go:279] https://192.168.61.122:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:10:27.766566  572974 api_server.go:103] status: https://192.168.61.122:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:10:28.261096  572974 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1101 10:10:28.267778  572974 api_server.go:279] https://192.168.61.122:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:10:28.267811  572974 api_server.go:103] status: https://192.168.61.122:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:10:28.760569  572974 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1101 10:10:28.766928  572974 api_server.go:279] https://192.168.61.122:8443/healthz returned 200:
	ok
	I1101 10:10:28.784336  572974 api_server.go:141] control plane version: v1.34.1
	I1101 10:10:28.784384  572974 api_server.go:131] duration metric: took 25.524192662s to wait for apiserver health ...
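	The repeated 500 responses above (failed post-start hooks, then fewer and fewer, then a 200 "ok") are the apiserver healthz wait that api_server.go performs roughly every 500ms. A minimal sketch of the same polling idea — not minikube's actual code; the endpoint, timeout, and TLS handling here are illustrative assumptions:

```go
// Polls an apiserver /healthz endpoint until it returns 200 OK or the
// deadline expires. A sketch of the wait loop shown in the log above.
package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(ctx context.Context, url string) error {
	// InsecureSkipVerify keeps the sketch self-contained; a real client
	// would load the cluster CA certificate instead.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	ticker := time.NewTicker(500 * time.Millisecond) // roughly the cadence visible in the log
	defer ticker.Stop()
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitForHealthz(ctx, "https://192.168.61.122:8443/healthz"); err != nil {
		fmt.Println("apiserver never became healthy:", err)
	}
}
```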
	I1101 10:10:28.784399  572974 cni.go:84] Creating CNI manager for ""
	I1101 10:10:28.784409  572974 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 10:10:28.786288  572974 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 10:10:28.787778  572974 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 10:10:28.807668  572974 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
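	The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced two lines earlier. Its exact contents are not in the log; a bridge-plus-portmap conflist of the same general shape, generated from Go purely for illustration (all field values are assumptions), could look like this:

```go
// Emits a minimal bridge CNI config comparable to the 1-k8s.conflist the log
// scp's into /etc/cni/net.d. Illustrative only, not minikube's generated file.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	conflist := map[string]any{
		"cniVersion": "0.4.0",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // consistent with the 10.244.0.x pod IPs seen elsewhere in the report
				},
			},
			{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	// A provisioner would write this to /etc/cni/net.d/1-k8s.conflist.
	fmt.Println(string(out))
}
```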
	I1101 10:10:28.858743  572974 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:10:28.872269  572974 system_pods.go:59] 6 kube-system pods found
	I1101 10:10:28.872336  572974 system_pods.go:61] "coredns-66bc5c9577-pzwdg" [6b3dc10c-d5ad-40f9-a28b-c4a89479f817] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:10:28.872350  572974 system_pods.go:61] "etcd-pause-533709" [784264db-f73a-4654-9e23-fe01943ce80b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:10:28.872364  572974 system_pods.go:61] "kube-apiserver-pause-533709" [5992caa7-9a4c-41a7-b093-d38008a71110] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:10:28.872405  572974 system_pods.go:61] "kube-controller-manager-pause-533709" [40074866-5117-49a2-800a-6091577fa142] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:10:28.872415  572974 system_pods.go:61] "kube-proxy-mkdfj" [1c0c82af-9116-41ce-9b01-bb2802550969] Running
	I1101 10:10:28.872425  572974 system_pods.go:61] "kube-scheduler-pause-533709" [4d3ac967-3992-4b4a-a4f7-bcaa03c9952b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:10:28.872436  572974 system_pods.go:74] duration metric: took 13.65921ms to wait for pod list to return data ...
	I1101 10:10:28.872450  572974 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:10:28.883369  572974 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 10:10:28.883494  572974 node_conditions.go:123] node cpu capacity is 2
	I1101 10:10:28.883529  572974 node_conditions.go:105] duration metric: took 11.072649ms to run NodePressure ...
	I1101 10:10:28.883631  572974 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 10:10:29.259407  572974 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1101 10:10:29.263351  572974 kubeadm.go:744] kubelet initialised
	I1101 10:10:29.263384  572974 kubeadm.go:745] duration metric: took 3.941399ms waiting for restarted kubelet to initialise ...
	I1101 10:10:29.263418  572974 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 10:10:29.282179  572974 ops.go:34] apiserver oom_adj: -16
	I1101 10:10:29.282228  572974 kubeadm.go:602] duration metric: took 28.922007333s to restartPrimaryControlPlane
	I1101 10:10:29.282244  572974 kubeadm.go:403] duration metric: took 29.050771335s to StartCluster
	I1101 10:10:29.282281  572974 settings.go:142] acquiring lock: {Name:mke0bea80b55c21af3a3a0f83862cfe6da014dd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:10:29.282421  572974 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-530629/kubeconfig
	I1101 10:10:29.284002  572974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/kubeconfig: {Name:mk1f1e6312f33030082fd627c6f74ca7eee16587 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:10:29.284358  572974 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.61.122 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:10:29.284502  572974 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:10:29.284686  572974 config.go:182] Loaded profile config "pause-533709": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:10:29.286098  572974 out.go:179] * Enabled addons: 
	I1101 10:10:29.286111  572974 out.go:179] * Verifying Kubernetes components...
	I1101 10:10:27.823498  573081 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 10:10:27.942253  573081 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 10:10:28.042066  573081 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 10:10:28.412468  573081 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 10:10:28.412674  573081 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-080837] and IPs [192.168.50.181 127.0.0.1 ::1]
	I1101 10:10:28.495829  573081 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 10:10:28.496124  573081 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-080837] and IPs [192.168.50.181 127.0.0.1 ::1]
	I1101 10:10:28.609561  573081 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 10:10:28.805701  573081 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 10:10:29.134014  573081 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 10:10:29.134263  573081 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 10:10:29.234558  573081 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 10:10:29.520930  573081 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 10:10:29.616114  573081 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 10:10:29.835978  573081 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 10:10:29.836490  573081 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 10:10:29.841581  573081 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 10:10:26.700879  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:26.701728  573367 main.go:143] libmachine: no network interface addresses found for domain embed-certs-468183 (source=lease)
	I1101 10:10:26.701757  573367 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:10:26.702177  573367 main.go:143] libmachine: unable to find current IP address of domain embed-certs-468183 in network mk-embed-certs-468183 (interfaces detected: [])
	I1101 10:10:26.702227  573367 retry.go:31] will retry after 1.917803915s: waiting for domain to come up
	I1101 10:10:28.622456  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:28.623337  573367 main.go:143] libmachine: no network interface addresses found for domain embed-certs-468183 (source=lease)
	I1101 10:10:28.623366  573367 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:10:28.623786  573367 main.go:143] libmachine: unable to find current IP address of domain embed-certs-468183 in network mk-embed-certs-468183 (interfaces detected: [])
	I1101 10:10:28.623838  573367 retry.go:31] will retry after 2.32656352s: waiting for domain to come up
	I1101 10:10:30.953060  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:30.953692  573367 main.go:143] libmachine: no network interface addresses found for domain embed-certs-468183 (source=lease)
	I1101 10:10:30.953713  573367 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:10:30.954073  573367 main.go:143] libmachine: unable to find current IP address of domain embed-certs-468183 in network mk-embed-certs-468183 (interfaces detected: [])
	I1101 10:10:30.954121  573367 retry.go:31] will retry after 2.612957344s: waiting for domain to come up
	I1101 10:10:29.843646  573081 out.go:252]   - Booting up control plane ...
	I1101 10:10:29.843796  573081 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 10:10:29.843973  573081 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 10:10:29.844080  573081 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 10:10:29.866035  573081 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 10:10:29.866869  573081 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 10:10:29.867012  573081 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 10:10:30.089773  573081 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 10:10:29.287328  572974 addons.go:515] duration metric: took 2.845537ms for enable addons: enabled=[]
	I1101 10:10:29.287414  572974 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:10:29.611155  572974 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:10:29.643411  572974 node_ready.go:35] waiting up to 6m0s for node "pause-533709" to be "Ready" ...
	I1101 10:10:29.647785  572974 node_ready.go:49] node "pause-533709" is "Ready"
	I1101 10:10:29.647829  572974 node_ready.go:38] duration metric: took 4.345239ms for node "pause-533709" to be "Ready" ...
	I1101 10:10:29.647849  572974 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:10:29.647930  572974 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:10:29.683944  572974 api_server.go:72] duration metric: took 399.5333ms to wait for apiserver process to appear ...
	I1101 10:10:29.683985  572974 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:10:29.684019  572974 api_server.go:253] Checking apiserver healthz at https://192.168.61.122:8443/healthz ...
	I1101 10:10:29.691227  572974 api_server.go:279] https://192.168.61.122:8443/healthz returned 200:
	ok
	I1101 10:10:29.692710  572974 api_server.go:141] control plane version: v1.34.1
	I1101 10:10:29.692751  572974 api_server.go:131] duration metric: took 8.755244ms to wait for apiserver health ...
	I1101 10:10:29.692766  572974 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:10:29.697073  572974 system_pods.go:59] 6 kube-system pods found
	I1101 10:10:29.697107  572974 system_pods.go:61] "coredns-66bc5c9577-pzwdg" [6b3dc10c-d5ad-40f9-a28b-c4a89479f817] Running
	I1101 10:10:29.697124  572974 system_pods.go:61] "etcd-pause-533709" [784264db-f73a-4654-9e23-fe01943ce80b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:10:29.697135  572974 system_pods.go:61] "kube-apiserver-pause-533709" [5992caa7-9a4c-41a7-b093-d38008a71110] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:10:29.697152  572974 system_pods.go:61] "kube-controller-manager-pause-533709" [40074866-5117-49a2-800a-6091577fa142] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:10:29.697164  572974 system_pods.go:61] "kube-proxy-mkdfj" [1c0c82af-9116-41ce-9b01-bb2802550969] Running
	I1101 10:10:29.697177  572974 system_pods.go:61] "kube-scheduler-pause-533709" [4d3ac967-3992-4b4a-a4f7-bcaa03c9952b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:10:29.697192  572974 system_pods.go:74] duration metric: took 4.416487ms to wait for pod list to return data ...
	I1101 10:10:29.697208  572974 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:10:29.700913  572974 default_sa.go:45] found service account: "default"
	I1101 10:10:29.700941  572974 default_sa.go:55] duration metric: took 3.720898ms for default service account to be created ...
	I1101 10:10:29.700955  572974 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:10:29.705208  572974 system_pods.go:86] 6 kube-system pods found
	I1101 10:10:29.705245  572974 system_pods.go:89] "coredns-66bc5c9577-pzwdg" [6b3dc10c-d5ad-40f9-a28b-c4a89479f817] Running
	I1101 10:10:29.705261  572974 system_pods.go:89] "etcd-pause-533709" [784264db-f73a-4654-9e23-fe01943ce80b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:10:29.705271  572974 system_pods.go:89] "kube-apiserver-pause-533709" [5992caa7-9a4c-41a7-b093-d38008a71110] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:10:29.705281  572974 system_pods.go:89] "kube-controller-manager-pause-533709" [40074866-5117-49a2-800a-6091577fa142] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:10:29.705288  572974 system_pods.go:89] "kube-proxy-mkdfj" [1c0c82af-9116-41ce-9b01-bb2802550969] Running
	I1101 10:10:29.705298  572974 system_pods.go:89] "kube-scheduler-pause-533709" [4d3ac967-3992-4b4a-a4f7-bcaa03c9952b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:10:29.705314  572974 system_pods.go:126] duration metric: took 4.352365ms to wait for k8s-apps to be running ...
	I1101 10:10:29.705323  572974 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:10:29.705393  572974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:10:29.729774  572974 system_svc.go:56] duration metric: took 24.435914ms WaitForService to wait for kubelet
	I1101 10:10:29.729821  572974 kubeadm.go:587] duration metric: took 445.416986ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:10:29.729846  572974 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:10:29.734041  572974 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 10:10:29.734080  572974 node_conditions.go:123] node cpu capacity is 2
	I1101 10:10:29.734098  572974 node_conditions.go:105] duration metric: took 4.242863ms to run NodePressure ...
	I1101 10:10:29.734116  572974 start.go:242] waiting for startup goroutines ...
	I1101 10:10:29.734127  572974 start.go:247] waiting for cluster config update ...
	I1101 10:10:29.734137  572974 start.go:256] writing updated cluster config ...
	I1101 10:10:29.734601  572974 ssh_runner.go:195] Run: rm -f paused
	I1101 10:10:29.741162  572974 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:10:29.742097  572974 kapi.go:59] client config for pause-533709: &rest.Config{Host:"https://192.168.61.122:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21833-530629/.minikube/profiles/pause-533709/client.crt", KeyFile:"/home/jenkins/minikube-integration/21833-530629/.minikube/profiles/pause-533709/client.key", CAFile:"/home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
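	The rest.Config dump above shows the client the test builds for pause-533709: only the host, client certificate/key, and cluster CA are set; everything else is zero-valued. An equivalent client can be constructed directly with client-go (paths taken from the log line; this is a sketch, not minikube's kapi helper):

```go
// Builds a Kubernetes clientset from the same host and cert/key/CA paths the
// kapi.go log line reports. Requires k8s.io/client-go and k8s.io/apimachinery.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://192.168.61.122:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/21833-530629/.minikube/profiles/pause-533709/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/21833-530629/.minikube/profiles/pause-533709/client.key",
			CAFile:   "/home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List the kube-system pods the test then waits on for readiness.
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name, p.Status.Phase)
	}
}
```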
	I1101 10:10:29.746510  572974 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pzwdg" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:10:29.754491  572974 pod_ready.go:94] pod "coredns-66bc5c9577-pzwdg" is "Ready"
	I1101 10:10:29.754524  572974 pod_ready.go:86] duration metric: took 7.982196ms for pod "coredns-66bc5c9577-pzwdg" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:10:29.758226  572974 pod_ready.go:83] waiting for pod "etcd-pause-533709" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:10:31.766356  572974 pod_ready.go:104] pod "etcd-pause-533709" is not "Ready", error: <nil>
	I1101 10:10:33.570088  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:33.570757  573367 main.go:143] libmachine: no network interface addresses found for domain embed-certs-468183 (source=lease)
	I1101 10:10:33.570784  573367 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:10:33.571144  573367 main.go:143] libmachine: unable to find current IP address of domain embed-certs-468183 in network mk-embed-certs-468183 (interfaces detected: [])
	I1101 10:10:33.571204  573367 retry.go:31] will retry after 3.631652192s: waiting for domain to come up
	I1101 10:10:36.590043  573081 kubeadm.go:319] [apiclient] All control plane components are healthy after 6.503586 seconds
	I1101 10:10:36.590152  573081 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 10:10:36.606263  573081 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 10:10:37.151664  573081 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 10:10:37.151963  573081 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-080837 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 10:10:37.667862  573081 kubeadm.go:319] [bootstrap-token] Using token: c3xn7r.p0qb0h17147juwlw
	I1101 10:10:37.670015  573081 out.go:252]   - Configuring RBAC rules ...
	I1101 10:10:37.670144  573081 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 10:10:37.679290  573081 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 10:10:37.690353  573081 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 10:10:37.693827  573081 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 10:10:37.697268  573081 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 10:10:37.703428  573081 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 10:10:37.731482  573081 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 10:10:38.066404  573081 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 10:10:38.127542  573081 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 10:10:38.131546  573081 kubeadm.go:319] 
	I1101 10:10:38.131642  573081 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 10:10:38.131653  573081 kubeadm.go:319] 
	I1101 10:10:38.131755  573081 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 10:10:38.131776  573081 kubeadm.go:319] 
	I1101 10:10:38.131800  573081 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 10:10:38.131856  573081 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 10:10:38.131915  573081 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 10:10:38.131922  573081 kubeadm.go:319] 
	I1101 10:10:38.131982  573081 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 10:10:38.132004  573081 kubeadm.go:319] 
	I1101 10:10:38.132092  573081 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 10:10:38.132104  573081 kubeadm.go:319] 
	I1101 10:10:38.132181  573081 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 10:10:38.132298  573081 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 10:10:38.132394  573081 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 10:10:38.132404  573081 kubeadm.go:319] 
	I1101 10:10:38.132547  573081 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 10:10:38.132657  573081 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 10:10:38.132670  573081 kubeadm.go:319] 
	I1101 10:10:38.132804  573081 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token c3xn7r.p0qb0h17147juwlw \
	I1101 10:10:38.132973  573081 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:56aa18b20985495d814b65ba7a2f910118620c74c98b944601f44598a9c0be1d \
	I1101 10:10:38.133016  573081 kubeadm.go:319] 	--control-plane 
	I1101 10:10:38.133028  573081 kubeadm.go:319] 
	I1101 10:10:38.133141  573081 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 10:10:38.133164  573081 kubeadm.go:319] 
	I1101 10:10:38.133298  573081 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token c3xn7r.p0qb0h17147juwlw \
	I1101 10:10:38.133482  573081 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:56aa18b20985495d814b65ba7a2f910118620c74c98b944601f44598a9c0be1d 
	I1101 10:10:38.142224  573081 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 10:10:38.142255  573081 cni.go:84] Creating CNI manager for ""
	I1101 10:10:38.142266  573081 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 10:10:38.143968  573081 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
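	The join commands printed by kubeadm above pin the cluster CA with --discovery-token-ca-cert-hash. That value is "sha256:" followed by the SHA-256 of the CA certificate's DER-encoded SubjectPublicKeyInfo; a small sketch that recomputes it from a ca.crt (the path is kubeadm's conventional location, shown here as an assumption):

```go
// Recomputes a kubeadm --discovery-token-ca-cert-hash value from the cluster
// CA certificate. Sketch only; path and error handling are illustrative.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // assumed default kubeadm CA path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash of the DER-encoded SubjectPublicKeyInfo of the CA certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
```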
	W1101 10:10:34.264295  572974 pod_ready.go:104] pod "etcd-pause-533709" is not "Ready", error: <nil>
	W1101 10:10:36.267210  572974 pod_ready.go:104] pod "etcd-pause-533709" is not "Ready", error: <nil>
	W1101 10:10:38.267588  572974 pod_ready.go:104] pod "etcd-pause-533709" is not "Ready", error: <nil>
	I1101 10:10:37.204707  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:37.205563  573367 main.go:143] libmachine: domain embed-certs-468183 has current primary IP address 192.168.83.42 and MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:37.205582  573367 main.go:143] libmachine: found domain IP: 192.168.83.42
	I1101 10:10:37.205589  573367 main.go:143] libmachine: reserving static IP address...
	I1101 10:10:37.206043  573367 main.go:143] libmachine: unable to find host DHCP lease matching {name: "embed-certs-468183", mac: "52:54:00:78:7b:11", ip: "192.168.83.42"} in network mk-embed-certs-468183
	I1101 10:10:37.449248  573367 main.go:143] libmachine: reserved static IP address 192.168.83.42 for domain embed-certs-468183
	I1101 10:10:37.449272  573367 main.go:143] libmachine: waiting for SSH...
	I1101 10:10:37.449278  573367 main.go:143] libmachine: Getting to WaitForSSH function...
	I1101 10:10:37.452780  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:37.453341  573367 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:7b:11", ip: ""} in network mk-embed-certs-468183: {Iface:virbr5 ExpiryTime:2025-11-01 11:10:35 +0000 UTC Type:0 Mac:52:54:00:78:7b:11 Iaid: IPaddr:192.168.83.42 Prefix:24 Hostname:minikube Clientid:01:52:54:00:78:7b:11}
	I1101 10:10:37.453380  573367 main.go:143] libmachine: domain embed-certs-468183 has defined IP address 192.168.83.42 and MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:37.453815  573367 main.go:143] libmachine: Using SSH client type: native
	I1101 10:10:37.454096  573367 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.42 22 <nil> <nil>}
	I1101 10:10:37.454110  573367 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1101 10:10:37.575365  573367 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:10:37.575741  573367 main.go:143] libmachine: domain creation complete
	I1101 10:10:37.577584  573367 machine.go:94] provisionDockerMachine start ...
	I1101 10:10:37.580567  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:37.581068  573367 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:7b:11", ip: ""} in network mk-embed-certs-468183: {Iface:virbr5 ExpiryTime:2025-11-01 11:10:35 +0000 UTC Type:0 Mac:52:54:00:78:7b:11 Iaid: IPaddr:192.168.83.42 Prefix:24 Hostname:embed-certs-468183 Clientid:01:52:54:00:78:7b:11}
	I1101 10:10:37.581093  573367 main.go:143] libmachine: domain embed-certs-468183 has defined IP address 192.168.83.42 and MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:37.581266  573367 main.go:143] libmachine: Using SSH client type: native
	I1101 10:10:37.581460  573367 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.42 22 <nil> <nil>}
	I1101 10:10:37.581477  573367 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:10:37.702233  573367 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1101 10:10:37.702283  573367 buildroot.go:166] provisioning hostname "embed-certs-468183"
	I1101 10:10:37.706516  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:37.707122  573367 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:7b:11", ip: ""} in network mk-embed-certs-468183: {Iface:virbr5 ExpiryTime:2025-11-01 11:10:35 +0000 UTC Type:0 Mac:52:54:00:78:7b:11 Iaid: IPaddr:192.168.83.42 Prefix:24 Hostname:embed-certs-468183 Clientid:01:52:54:00:78:7b:11}
	I1101 10:10:37.707158  573367 main.go:143] libmachine: domain embed-certs-468183 has defined IP address 192.168.83.42 and MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:37.707469  573367 main.go:143] libmachine: Using SSH client type: native
	I1101 10:10:37.707716  573367 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.42 22 <nil> <nil>}
	I1101 10:10:37.707734  573367 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-468183 && echo "embed-certs-468183" | sudo tee /etc/hostname
	I1101 10:10:37.859466  573367 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-468183
	
	I1101 10:10:37.863437  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:37.863984  573367 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:7b:11", ip: ""} in network mk-embed-certs-468183: {Iface:virbr5 ExpiryTime:2025-11-01 11:10:35 +0000 UTC Type:0 Mac:52:54:00:78:7b:11 Iaid: IPaddr:192.168.83.42 Prefix:24 Hostname:embed-certs-468183 Clientid:01:52:54:00:78:7b:11}
	I1101 10:10:37.864034  573367 main.go:143] libmachine: domain embed-certs-468183 has defined IP address 192.168.83.42 and MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:37.864294  573367 main.go:143] libmachine: Using SSH client type: native
	I1101 10:10:37.864613  573367 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.42 22 <nil> <nil>}
	I1101 10:10:37.864646  573367 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-468183' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-468183/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-468183' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:10:37.999317  573367 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:10:37.999359  573367 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21833-530629/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-530629/.minikube}
	I1101 10:10:37.999436  573367 buildroot.go:174] setting up certificates
	I1101 10:10:37.999453  573367 provision.go:84] configureAuth start
	I1101 10:10:38.003211  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:38.003771  573367 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:7b:11", ip: ""} in network mk-embed-certs-468183: {Iface:virbr5 ExpiryTime:2025-11-01 11:10:35 +0000 UTC Type:0 Mac:52:54:00:78:7b:11 Iaid: IPaddr:192.168.83.42 Prefix:24 Hostname:embed-certs-468183 Clientid:01:52:54:00:78:7b:11}
	I1101 10:10:38.003812  573367 main.go:143] libmachine: domain embed-certs-468183 has defined IP address 192.168.83.42 and MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:38.007093  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:38.007729  573367 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:7b:11", ip: ""} in network mk-embed-certs-468183: {Iface:virbr5 ExpiryTime:2025-11-01 11:10:35 +0000 UTC Type:0 Mac:52:54:00:78:7b:11 Iaid: IPaddr:192.168.83.42 Prefix:24 Hostname:embed-certs-468183 Clientid:01:52:54:00:78:7b:11}
	I1101 10:10:38.007778  573367 main.go:143] libmachine: domain embed-certs-468183 has defined IP address 192.168.83.42 and MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:38.008010  573367 provision.go:143] copyHostCerts
	I1101 10:10:38.008083  573367 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-530629/.minikube/ca.pem, removing ...
	I1101 10:10:38.008114  573367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-530629/.minikube/ca.pem
	I1101 10:10:38.008207  573367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-530629/.minikube/ca.pem (1078 bytes)
	I1101 10:10:38.008340  573367 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-530629/.minikube/cert.pem, removing ...
	I1101 10:10:38.008353  573367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-530629/.minikube/cert.pem
	I1101 10:10:38.008401  573367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-530629/.minikube/cert.pem (1123 bytes)
	I1101 10:10:38.008498  573367 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-530629/.minikube/key.pem, removing ...
	I1101 10:10:38.008508  573367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-530629/.minikube/key.pem
	I1101 10:10:38.009134  573367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-530629/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-530629/.minikube/key.pem (1675 bytes)
	I1101 10:10:38.009265  573367 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-530629/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca-key.pem org=jenkins.embed-certs-468183 san=[127.0.0.1 192.168.83.42 embed-certs-468183 localhost minikube]
	I1101 10:10:38.219217  573367 provision.go:177] copyRemoteCerts
	I1101 10:10:38.219323  573367 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:10:38.223171  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:38.223663  573367 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:7b:11", ip: ""} in network mk-embed-certs-468183: {Iface:virbr5 ExpiryTime:2025-11-01 11:10:35 +0000 UTC Type:0 Mac:52:54:00:78:7b:11 Iaid: IPaddr:192.168.83.42 Prefix:24 Hostname:embed-certs-468183 Clientid:01:52:54:00:78:7b:11}
	I1101 10:10:38.223700  573367 main.go:143] libmachine: domain embed-certs-468183 has defined IP address 192.168.83.42 and MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:38.223877  573367 sshutil.go:53] new ssh client: &{IP:192.168.83.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/embed-certs-468183/id_rsa Username:docker}
	I1101 10:10:38.333177  573367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 10:10:38.371609  573367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1101 10:10:38.405296  573367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 10:10:38.443379  573367 provision.go:87] duration metric: took 443.904241ms to configureAuth
	I1101 10:10:38.443419  573367 buildroot.go:189] setting minikube options for container-runtime
	I1101 10:10:38.443645  573367 config.go:182] Loaded profile config "embed-certs-468183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:10:38.447079  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:38.447550  573367 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:7b:11", ip: ""} in network mk-embed-certs-468183: {Iface:virbr5 ExpiryTime:2025-11-01 11:10:35 +0000 UTC Type:0 Mac:52:54:00:78:7b:11 Iaid: IPaddr:192.168.83.42 Prefix:24 Hostname:embed-certs-468183 Clientid:01:52:54:00:78:7b:11}
	I1101 10:10:38.447586  573367 main.go:143] libmachine: domain embed-certs-468183 has defined IP address 192.168.83.42 and MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:38.447817  573367 main.go:143] libmachine: Using SSH client type: native
	I1101 10:10:38.448133  573367 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.42 22 <nil> <nil>}
	I1101 10:10:38.448160  573367 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:10:38.726679  573367 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:10:38.726726  573367 machine.go:97] duration metric: took 1.149121484s to provisionDockerMachine
	I1101 10:10:38.726742  573367 client.go:176] duration metric: took 20.737379235s to LocalClient.Create
	I1101 10:10:38.726772  573367 start.go:167] duration metric: took 20.737458211s to libmachine.API.Create "embed-certs-468183"
	I1101 10:10:38.726783  573367 start.go:293] postStartSetup for "embed-certs-468183" (driver="kvm2")
	I1101 10:10:38.726797  573367 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:10:38.726886  573367 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:10:38.730826  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:38.731390  573367 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:7b:11", ip: ""} in network mk-embed-certs-468183: {Iface:virbr5 ExpiryTime:2025-11-01 11:10:35 +0000 UTC Type:0 Mac:52:54:00:78:7b:11 Iaid: IPaddr:192.168.83.42 Prefix:24 Hostname:embed-certs-468183 Clientid:01:52:54:00:78:7b:11}
	I1101 10:10:38.731430  573367 main.go:143] libmachine: domain embed-certs-468183 has defined IP address 192.168.83.42 and MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:38.731618  573367 sshutil.go:53] new ssh client: &{IP:192.168.83.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/embed-certs-468183/id_rsa Username:docker}
	I1101 10:10:38.825200  573367 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:10:38.831147  573367 info.go:137] Remote host: Buildroot 2025.02
	I1101 10:10:38.831175  573367 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-530629/.minikube/addons for local assets ...
	I1101 10:10:38.831253  573367 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-530629/.minikube/files for local assets ...
	I1101 10:10:38.831345  573367 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-530629/.minikube/files/etc/ssl/certs/5345152.pem -> 5345152.pem in /etc/ssl/certs
	I1101 10:10:38.831488  573367 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:10:38.846545  573367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/files/etc/ssl/certs/5345152.pem --> /etc/ssl/certs/5345152.pem (1708 bytes)
	I1101 10:10:38.879622  573367 start.go:296] duration metric: took 152.765264ms for postStartSetup
	I1101 10:10:38.883210  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:38.883643  573367 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:7b:11", ip: ""} in network mk-embed-certs-468183: {Iface:virbr5 ExpiryTime:2025-11-01 11:10:35 +0000 UTC Type:0 Mac:52:54:00:78:7b:11 Iaid: IPaddr:192.168.83.42 Prefix:24 Hostname:embed-certs-468183 Clientid:01:52:54:00:78:7b:11}
	I1101 10:10:38.883683  573367 main.go:143] libmachine: domain embed-certs-468183 has defined IP address 192.168.83.42 and MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:38.884040  573367 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/embed-certs-468183/config.json ...
	I1101 10:10:38.884336  573367 start.go:128] duration metric: took 20.898036527s to createHost
	I1101 10:10:38.887283  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:38.887648  573367 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:7b:11", ip: ""} in network mk-embed-certs-468183: {Iface:virbr5 ExpiryTime:2025-11-01 11:10:35 +0000 UTC Type:0 Mac:52:54:00:78:7b:11 Iaid: IPaddr:192.168.83.42 Prefix:24 Hostname:embed-certs-468183 Clientid:01:52:54:00:78:7b:11}
	I1101 10:10:38.887678  573367 main.go:143] libmachine: domain embed-certs-468183 has defined IP address 192.168.83.42 and MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:38.887877  573367 main.go:143] libmachine: Using SSH client type: native
	I1101 10:10:38.888155  573367 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.42 22 <nil> <nil>}
	I1101 10:10:38.888172  573367 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1101 10:10:39.010210  573367 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761991838.963449647
	
	I1101 10:10:39.010239  573367 fix.go:216] guest clock: 1761991838.963449647
	I1101 10:10:39.010247  573367 fix.go:229] Guest: 2025-11-01 10:10:38.963449647 +0000 UTC Remote: 2025-11-01 10:10:38.884356723 +0000 UTC m=+27.582695542 (delta=79.092924ms)
	I1101 10:10:39.010269  573367 fix.go:200] guest clock delta is within tolerance: 79.092924ms
	I1101 10:10:39.010275  573367 start.go:83] releasing machines lock for "embed-certs-468183", held for 21.024175042s
	I1101 10:10:39.013132  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:39.013543  573367 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:7b:11", ip: ""} in network mk-embed-certs-468183: {Iface:virbr5 ExpiryTime:2025-11-01 11:10:35 +0000 UTC Type:0 Mac:52:54:00:78:7b:11 Iaid: IPaddr:192.168.83.42 Prefix:24 Hostname:embed-certs-468183 Clientid:01:52:54:00:78:7b:11}
	I1101 10:10:39.013563  573367 main.go:143] libmachine: domain embed-certs-468183 has defined IP address 192.168.83.42 and MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:39.014104  573367 ssh_runner.go:195] Run: cat /version.json
	I1101 10:10:39.014199  573367 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:10:39.017789  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:39.017986  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:39.018319  573367 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:7b:11", ip: ""} in network mk-embed-certs-468183: {Iface:virbr5 ExpiryTime:2025-11-01 11:10:35 +0000 UTC Type:0 Mac:52:54:00:78:7b:11 Iaid: IPaddr:192.168.83.42 Prefix:24 Hostname:embed-certs-468183 Clientid:01:52:54:00:78:7b:11}
	I1101 10:10:39.018353  573367 main.go:143] libmachine: domain embed-certs-468183 has defined IP address 192.168.83.42 and MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:39.018438  573367 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:7b:11", ip: ""} in network mk-embed-certs-468183: {Iface:virbr5 ExpiryTime:2025-11-01 11:10:35 +0000 UTC Type:0 Mac:52:54:00:78:7b:11 Iaid: IPaddr:192.168.83.42 Prefix:24 Hostname:embed-certs-468183 Clientid:01:52:54:00:78:7b:11}
	I1101 10:10:39.018502  573367 main.go:143] libmachine: domain embed-certs-468183 has defined IP address 192.168.83.42 and MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:39.018551  573367 sshutil.go:53] new ssh client: &{IP:192.168.83.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/embed-certs-468183/id_rsa Username:docker}
	I1101 10:10:39.018789  573367 sshutil.go:53] new ssh client: &{IP:192.168.83.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/embed-certs-468183/id_rsa Username:docker}
	I1101 10:10:39.130563  573367 ssh_runner.go:195] Run: systemctl --version
	I1101 10:10:39.137802  573367 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:10:39.306713  573367 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:10:39.315528  573367 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:10:39.315610  573367 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:10:39.338455  573367 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 10:10:39.338487  573367 start.go:496] detecting cgroup driver to use...
	I1101 10:10:39.338581  573367 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:10:39.370628  573367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:10:39.390305  573367 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:10:39.390369  573367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:10:39.409945  573367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:10:39.428368  573367 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:10:39.615872  573367 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:10:39.841366  573367 docker.go:234] disabling docker service ...
	I1101 10:10:39.841455  573367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:10:39.858136  573367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:10:39.874425  573367 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:10:40.037612  573367 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:10:40.191403  573367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:10:40.208814  573367 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:10:40.234701  573367 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:10:40.234767  573367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:10:40.248073  573367 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:10:40.248155  573367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:10:40.261116  573367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:10:40.275410  573367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:10:40.288371  573367 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:10:40.302378  573367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:10:40.316102  573367 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:10:40.339004  573367 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:10:40.353182  573367 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:10:40.364252  573367 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 10:10:40.364322  573367 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 10:10:40.397688  573367 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:10:40.425139  573367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:10:40.598185  573367 ssh_runner.go:195] Run: sudo systemctl restart crio
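
The ssh_runner lines above show the runtime being pointed at the CRI-O socket and /etc/crio/crio.conf.d/02-crio.conf being patched in place (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl) before crio is restarted. The Go sketch below replays the same shell edits locally with os/exec; the paths and sed expressions are copied from the log, but this is an illustration of the sequence, not minikube's actual configuration code.

    // crio_config_sketch.go - replays the configuration edits visible in the
    // log above against a local /etc/crio/crio.conf.d/02-crio.conf. Illustrative only.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // run executes one "sh -c" edit, the same shape as the ssh_runner calls above.
    func run(cmd string) error {
    	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("%q failed: %v\n%s", cmd, err, out)
    	}
    	return nil
    }

    func main() {
    	edits := []string{
    		// point crictl at the CRI-O socket
    		`sudo mkdir -p /etc && printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml`,
    		// pause image and cgroup driver, as in the log
    		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf`,
    		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
    		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
    		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
    		// allow pods to bind low ports
    		`sudo grep -q '^ *default_sysctls' /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf`,
    		`sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf`,
    		// pick up the new config
    		`sudo systemctl daemon-reload && sudo systemctl restart crio`,
    	}
    	for _, e := range edits {
    		if err := run(e); err != nil {
    			fmt.Println(err)
    			return
    		}
    	}
    }
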
	I1101 10:10:40.731129  573367 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:10:40.731216  573367 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:10:40.739606  573367 start.go:564] Will wait 60s for crictl version
	I1101 10:10:40.739699  573367 ssh_runner.go:195] Run: which crictl
	I1101 10:10:40.745050  573367 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 10:10:40.797153  573367 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1101 10:10:40.797241  573367 ssh_runner.go:195] Run: crio --version
	I1101 10:10:40.832657  573367 ssh_runner.go:195] Run: crio --version
	I1101 10:10:40.867706  573367 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1101 10:10:40.871830  573367 main.go:143] libmachine: domain embed-certs-468183 has defined MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:40.872400  573367 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:7b:11", ip: ""} in network mk-embed-certs-468183: {Iface:virbr5 ExpiryTime:2025-11-01 11:10:35 +0000 UTC Type:0 Mac:52:54:00:78:7b:11 Iaid: IPaddr:192.168.83.42 Prefix:24 Hostname:embed-certs-468183 Clientid:01:52:54:00:78:7b:11}
	I1101 10:10:40.872428  573367 main.go:143] libmachine: domain embed-certs-468183 has defined IP address 192.168.83.42 and MAC address 52:54:00:78:7b:11 in network mk-embed-certs-468183
	I1101 10:10:40.872615  573367 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1101 10:10:40.877856  573367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:10:40.894755  573367 kubeadm.go:884] updating cluster {Name:embed-certs-468183 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.34.1 ClusterName:embed-certs-468183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.42 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:10:40.894873  573367 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:10:40.894947  573367 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:10:40.935225  573367 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1101 10:10:40.935318  573367 ssh_runner.go:195] Run: which lz4
	I1101 10:10:40.940417  573367 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 10:10:40.946039  573367 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 10:10:40.946075  573367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-530629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1101 10:10:38.145009  573081 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 10:10:38.178201  573081 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1101 10:10:38.250010  573081 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 10:10:38.250072  573081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:10:38.250078  573081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-080837 minikube.k8s.io/updated_at=2025_11_01T10_10_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7 minikube.k8s.io/name=old-k8s-version-080837 minikube.k8s.io/primary=true
	I1101 10:10:38.639939  573081 ops.go:34] apiserver oom_adj: -16
	I1101 10:10:38.639975  573081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:10:39.140114  573081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:10:39.641026  573081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:10:40.140659  573081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:10:40.640388  573081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:10:41.140872  573081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:10:41.640091  573081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:10:42.140089  573081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:10:42.640064  573081 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1101 10:10:40.765670  572974 pod_ready.go:104] pod "etcd-pause-533709" is not "Ready", error: <nil>
	W1101 10:10:42.765732  572974 pod_ready.go:104] pod "etcd-pause-533709" is not "Ready", error: <nil>
	I1101 10:10:43.266700  572974 pod_ready.go:94] pod "etcd-pause-533709" is "Ready"
	I1101 10:10:43.266752  572974 pod_ready.go:86] duration metric: took 13.508494921s for pod "etcd-pause-533709" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:10:43.269375  572974 pod_ready.go:83] waiting for pod "kube-apiserver-pause-533709" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:10:43.275936  572974 pod_ready.go:94] pod "kube-apiserver-pause-533709" is "Ready"
	I1101 10:10:43.275975  572974 pod_ready.go:86] duration metric: took 6.561549ms for pod "kube-apiserver-pause-533709" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:10:43.278319  572974 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-533709" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:10:43.283272  572974 pod_ready.go:94] pod "kube-controller-manager-pause-533709" is "Ready"
	I1101 10:10:43.283308  572974 pod_ready.go:86] duration metric: took 4.963094ms for pod "kube-controller-manager-pause-533709" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:10:43.287746  572974 pod_ready.go:83] waiting for pod "kube-proxy-mkdfj" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:10:43.462766  572974 pod_ready.go:94] pod "kube-proxy-mkdfj" is "Ready"
	I1101 10:10:43.462802  572974 pod_ready.go:86] duration metric: took 175.021381ms for pod "kube-proxy-mkdfj" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:10:43.663406  572974 pod_ready.go:83] waiting for pod "kube-scheduler-pause-533709" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:10:44.062440  572974 pod_ready.go:94] pod "kube-scheduler-pause-533709" is "Ready"
	I1101 10:10:44.062481  572974 pod_ready.go:86] duration metric: took 399.032223ms for pod "kube-scheduler-pause-533709" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:10:44.062497  572974 pod_ready.go:40] duration metric: took 14.321295721s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:10:44.116861  572974 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 10:10:44.119113  572974 out.go:179] * Done! kubectl is now configured to use "pause-533709" cluster and "default" namespace by default
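
The pod_ready lines for pause-533709 above follow a simple shape: poll each control-plane pod until it reports Ready (or the overall deadline passes) and record how long the wait took. Below is a small stand-alone sketch of that polling pattern using only the standard library; the kubectl-based readiness check and the polling interval are assumptions made for the sketch, not the helper's real implementation.

    // wait_ready_sketch.go - polls until a named pod reports Ready, mirroring the
    // waiting pattern in the pod_ready.go lines above. Illustrative only.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // isReady shells out to kubectl and reads the pod's Ready condition.
    // This kubectl invocation is an assumption used only for the sketch.
    func isReady(namespace, pod string) bool {
    	out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", pod,
    		"-o", "jsonpath={.status.conditions[?(@.type==\"Ready\")].status}").Output()
    	return err == nil && strings.TrimSpace(string(out)) == "True"
    }

    func waitReady(namespace, pod string, timeout time.Duration) error {
    	start := time.Now()
    	deadline := start.Add(timeout)
    	for time.Now().Before(deadline) {
    		if isReady(namespace, pod) {
    			fmt.Printf("pod %q is Ready after %s\n", pod, time.Since(start))
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond) // polling interval chosen for the sketch
    	}
    	return fmt.Errorf("pod %q not Ready within %s", pod, timeout)
    }

    func main() {
    	for _, pod := range []string{"etcd-pause-533709", "kube-apiserver-pause-533709"} {
    		if err := waitReady("kube-system", pod, 4*time.Minute); err != nil {
    			fmt.Println(err)
    		}
    	}
    }
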
	I1101 10:10:42.775732  573367 crio.go:462] duration metric: took 1.83534813s to copy over tarball
	I1101 10:10:42.775817  573367 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 10:10:44.616315  573367 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.840456288s)
	I1101 10:10:44.616357  573367 crio.go:469] duration metric: took 1.840594264s to extract the tarball
	I1101 10:10:44.616369  573367 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 10:10:44.665105  573367 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:10:44.722754  573367 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:10:44.722782  573367 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:10:44.722791  573367 kubeadm.go:935] updating node { 192.168.83.42 8443 v1.34.1 crio true true} ...
	I1101 10:10:44.722960  573367 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-468183 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.42
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-468183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:10:44.723065  573367 ssh_runner.go:195] Run: crio config
	I1101 10:10:44.791843  573367 cni.go:84] Creating CNI manager for ""
	I1101 10:10:44.791884  573367 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 10:10:44.791932  573367 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:10:44.791968  573367 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.42 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-468183 NodeName:embed-certs-468183 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.42"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.42 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:10:44.792167  573367 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.42
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-468183"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.83.42"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.42"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:10:44.792266  573367 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:10:44.807712  573367 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:10:44.807799  573367 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:10:44.822776  573367 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1101 10:10:44.846016  573367 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:10:44.868791  573367 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1101 10:10:44.893217  573367 ssh_runner.go:195] Run: grep 192.168.83.42	control-plane.minikube.internal$ /etc/hosts
	I1101 10:10:44.898618  573367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.42	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:10:44.916742  573367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:10:45.087975  573367 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:10:45.111815  573367 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/embed-certs-468183 for IP: 192.168.83.42
	I1101 10:10:45.111850  573367 certs.go:195] generating shared ca certs ...
	I1101 10:10:45.111878  573367 certs.go:227] acquiring lock for ca certs: {Name:mkfa41f6ee02a6d4adbbbd414d6f4b29bf47b076 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:10:45.112125  573367 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-530629/.minikube/ca.key
	I1101 10:10:45.112206  573367 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-530629/.minikube/proxy-client-ca.key
	I1101 10:10:45.112224  573367 certs.go:257] generating profile certs ...
	I1101 10:10:45.112336  573367 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/embed-certs-468183/client.key
	I1101 10:10:45.112360  573367 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/embed-certs-468183/client.crt with IP's: []
	I1101 10:10:45.531828  573367 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/embed-certs-468183/client.crt ...
	I1101 10:10:45.531858  573367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/embed-certs-468183/client.crt: {Name:mk3d71367b0f582bcf2e30c956f2d9f3ba1abb1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:10:45.532105  573367 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/embed-certs-468183/client.key ...
	I1101 10:10:45.532128  573367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/embed-certs-468183/client.key: {Name:mk37e47c3275b49359d801528f9a442411f1b540 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:10:45.532308  573367 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/embed-certs-468183/apiserver.key.cc24d00c
	I1101 10:10:45.532335  573367 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/embed-certs-468183/apiserver.crt.cc24d00c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.42]
	I1101 10:10:45.842494  573367 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/embed-certs-468183/apiserver.crt.cc24d00c ...
	I1101 10:10:45.842525  573367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/embed-certs-468183/apiserver.crt.cc24d00c: {Name:mk3e8e0b2c20e6891d2c07262c8d6c7192ebde8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:10:45.843078  573367 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/embed-certs-468183/apiserver.key.cc24d00c ...
	I1101 10:10:45.843108  573367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/embed-certs-468183/apiserver.key.cc24d00c: {Name:mkadb9c774c85c6a1df05ef5dbe4b5378c246fb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:10:45.843276  573367 certs.go:382] copying /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/embed-certs-468183/apiserver.crt.cc24d00c -> /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/embed-certs-468183/apiserver.crt
	I1101 10:10:45.843403  573367 certs.go:386] copying /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/embed-certs-468183/apiserver.key.cc24d00c -> /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/embed-certs-468183/apiserver.key
	I1101 10:10:45.843520  573367 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/embed-certs-468183/proxy-client.key
	I1101 10:10:45.843549  573367 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/embed-certs-468183/proxy-client.crt with IP's: []
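
The certs.go/crypto.go lines above generate a client certificate, an apiserver certificate whose SANs cover the service IP (10.96.0.1), loopback and the node IP, and an aggregator proxy-client certificate, each signed by the shared minikube CA. The sketch below shows how a CA-signed certificate with IP SANs can be issued with the standard crypto/x509 package; key size, validity, subjects and file names are illustrative choices for this sketch, not minikube's.

    // issue_cert_sketch.go - creates a CA and a CA-signed certificate carrying IP
    // SANs, similar in spirit to the apiserver cert generated in the log above.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Self-signed CA (the log reuses an existing minikubeCA; this sketch makes one).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Leaf certificate carrying the IP SANs seen in the log (service IP, loopback, node IP).
    	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	leafTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.83.42"),
    		},
    	}
    	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)

    	// Write the leaf cert in PEM form, as the crypto.go "Writing cert to ..." lines do.
    	f, _ := os.Create("apiserver.crt")
    	defer f.Close()
    	pem.Encode(f, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }
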
	
	
	==> CRI-O <==
	Nov 01 10:10:47 pause-533709 crio[3025]: time="2025-11-01 10:10:47.574476933Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:448d1985ab76739bb42ffccbdf35736a33a142fe1b998d80620735bb7649be34,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-pzwdg,Uid:6b3dc10c-d5ad-40f9-a28b-c4a89479f817,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1761991828398811301,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-pzwdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b3dc10c-d5ad-40f9-a28b-c4a89479f817,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T10:10:28.076627666Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:83d5711c535731e8d3191ea42d2a1c3caaa12b17b331b2b206c4eecabc89d3e8,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-533709,Uid:3c9f43c330cbd2ee80a698ee9579baec,Namespace:kube-system,
Attempt:1,},State:SANDBOX_READY,CreatedAt:1761991823040807216,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c9f43c330cbd2ee80a698ee9579baec,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3c9f43c330cbd2ee80a698ee9579baec,kubernetes.io/config.seen: 2025-11-01T10:10:03.075472038Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0140522956c8ab14f515fbd76a0547b021cd5568bed001ba350245da916a0023,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-533709,Uid:991b90746afec243940c42caa25f71de,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1761991822978942695,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 991b90746afec
243940c42caa25f71de,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 991b90746afec243940c42caa25f71de,kubernetes.io/config.seen: 2025-11-01T10:10:03.075470485Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ee4cd9b27fade8ce46d92db9f4a50569a2b873fb044aa09a979db8f7eaeb5cf2,Metadata:&PodSandboxMetadata{Name:kube-proxy-mkdfj,Uid:1c0c82af-9116-41ce-9b01-bb2802550969,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1761991799645911332,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-mkdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c0c82af-9116-41ce-9b01-bb2802550969,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T10:09:11.363833927Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b6692431ababa239bd5dd47ad4baff5026564b431363e83a09cd90bf0fed9363,Metadata:&PodSan
dboxMetadata{Name:etcd-pause-533709,Uid:e724985d54b20b982f6f22f4e5940b63,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1761991799640717060,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e724985d54b20b982f6f22f4e5940b63,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.122:2379,kubernetes.io/config.hash: e724985d54b20b982f6f22f4e5940b63,kubernetes.io/config.seen: 2025-11-01T10:09:06.061345382Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a0e82a8a822cb3c6aee643fce25b92f1858fd3ddaf21afb6bbc30bad5c755ffe,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-533709,Uid:63f0943a93b3ceab023a59b1a3fb2aeb,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1761991799627802220,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.
kubernetes.pod.name: kube-apiserver-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f0943a93b3ceab023a59b1a3fb2aeb,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.122:8443,kubernetes.io/config.hash: 63f0943a93b3ceab023a59b1a3fb2aeb,kubernetes.io/config.seen: 2025-11-01T10:09:06.061348370Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bae233ae556260c4e88d2193e20561540e04294d35472b5f5d6a4cff2e0a6764,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-pzwdg,Uid:6b3dc10c-d5ad-40f9-a28b-c4a89479f817,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1761991751960045128,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-pzwdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b3dc10c-d5ad-40f9-a28b-c4a89479f817,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io
/config.seen: 2025-11-01T10:09:11.617390336Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d203de259fd7beced0081a6b72a8380e658506af2be59b4d3931ac91b8b4be47,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-9jvqd,Uid:cfc1852e-2f25-4ca0-82a1-28fcef5b5bb8,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1761991751875889740,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-9jvqd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfc1852e-2f25-4ca0-82a1-28fcef5b5bb8,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T10:09:11.517419485Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6d8c483d789c5de14f76a3b12b920558e75cc4044bf2fc60ffb5c50b86e70116,Metadata:&PodSandboxMetadata{Name:kube-proxy-mkdfj,Uid:1c0c82af-9116-41ce-9b01-bb2802550969,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1761991751700568939,
Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-mkdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c0c82af-9116-41ce-9b01-bb2802550969,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T10:09:11.363833927Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e51a70d4e7eaee3611780ee7f117930e472f8ccab7ffe7a567e9b05e619d0664,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-533709,Uid:63f0943a93b3ceab023a59b1a3fb2aeb,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1761991739955552842,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f0943a93b3ceab023a59b1a3fb2aeb,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.adverti
se-address.endpoint: 192.168.61.122:8443,kubernetes.io/config.hash: 63f0943a93b3ceab023a59b1a3fb2aeb,kubernetes.io/config.seen: 2025-11-01T10:08:59.366310209Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6af75b2f4b1cdc4fddac2fc53200bd4bc81161be7df022fe2f37b6831035bf6e,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-533709,Uid:3c9f43c330cbd2ee80a698ee9579baec,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1761991739927254740,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c9f43c330cbd2ee80a698ee9579baec,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3c9f43c330cbd2ee80a698ee9579baec,kubernetes.io/config.seen: 2025-11-01T10:08:59.366307720Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5579fe961309082acb8f8271e1d22873a81d3ad15b76f11154982cadcc549444,Metadat
a:&PodSandboxMetadata{Name:kube-controller-manager-pause-533709,Uid:991b90746afec243940c42caa25f71de,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1761991739926036697,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 991b90746afec243940c42caa25f71de,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 991b90746afec243940c42caa25f71de,kubernetes.io/config.seen: 2025-11-01T10:08:59.366304131Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=ce7264d4-fd78-4251-9b79-8f919c535a49 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 01 10:10:47 pause-533709 crio[3025]: time="2025-11-01 10:10:47.575812261Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=64b36711-c809-411b-b577-95dd7ff52669 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:10:47 pause-533709 crio[3025]: time="2025-11-01 10:10:47.575871931Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=64b36711-c809-411b-b577-95dd7ff52669 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:10:47 pause-533709 crio[3025]: time="2025-11-01 10:10:47.576743169Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:498354aef4312f07ed2c76ce63c5943b9749ab20856b79300060015652003383,PodSandboxId:448d1985ab76739bb42ffccbdf35736a33a142fe1b998d80620735bb7649be34,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761991828721437670,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pzwdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b3dc10c-d5ad-40f9-a28b-c4a89479f817,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8ffdea27f223ff335f1028f2c3f5349fd3a05ea5e4ca994148b67c06ef30019,PodSandboxId:a0e82a8a822cb3c6aee643fce25b92f1858fd3ddaf21afb6bbc30bad5c755ffe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761991823438420171,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f0943a93b3ceab023a59b1a3fb2aeb,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e204777a5b47ad5602b9943aa82ef3b3c9cbc9ffab40a8c53b196972ab1f8096,PodSandboxId:83d5711c535731e8d3191ea42d2a1c3caaa12b17b331b2b206c4eecabc89d3e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9
965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761991823297216243,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c9f43c330cbd2ee80a698ee9579baec,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d769cb43b90bbe32810e36eb46267c2143eb8836ca85a96afb4bf2f7172db304,PodSandboxId:0140522956c8ab14f515fbd76a0547b021cd5568bed001ba350245da916a0023,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d06195
38f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761991823267168173,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 991b90746afec243940c42caa25f71de,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98ff14e18b0891e76d4fd2fadbbac3a6e50f2d02759c03d0fb851ab167f8fbf3,PodSandboxId:b6692431ababa239bd5dd47ad4baff5026564b431363e83a09cd90bf0fed9363,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761991822954918309,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e724985d54b20b982f6f22f4e5940b63,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:429c6ef4a6c572cb4492e1a9dda379db7efab14e62f9fc850f89c70fc81bb4ba,PodSandboxId:ee4cd9b27fade
8ce46d92db9f4a50569a2b873fb044aa09a979db8f7eaeb5cf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761991800668520680,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c0c82af-9116-41ce-9b01-bb2802550969,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c7105be5e4c21dc0972f008c4fd1f88839a3bcfa0f60e4c6cf4063c49a283ef,PodSandboxId:a0e82a8a822cb3c6aee643fce25b92f1858fd3ddaf21af
b6bbc30bad5c755ffe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761991800647213719,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f0943a93b3ceab023a59b1a3fb2aeb,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b993b8fbb2d6a2d30b60ce
04571b393da5a12345208c74d4d9c42e72514262a7,PodSandboxId:b6692431ababa239bd5dd47ad4baff5026564b431363e83a09cd90bf0fed9363,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761991799913118772,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e724985d54b20b982f6f22f4e5940b63,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes
.pod.terminationGracePeriod: 30,},},&Container{Id:aa8db6bc66adcb7f5314b8afd3ae06e27b6df6b2f45271c09a78271c6e6aa221,PodSandboxId:bae233ae556260c4e88d2193e20561540e04294d35472b5f5d6a4cff2e0a6764,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761991753185539055,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pzwdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b3dc10c-d5ad-40f9-a28b-c4a89479f817,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"proto
col\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bbaa009a8d7c572ab9c6f67864a5b74d4937c9c0fdfb81ff3db36bd7b78f19e,PodSandboxId:6d8c483d789c5de14f76a3b12b920558e75cc4044bf2fc60ffb5c50b86e70116,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761991752206049321,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkdfj,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 1c0c82af-9116-41ce-9b01-bb2802550969,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e362762826b71a934dbb5eea442d975cc05597b31ae86c9e7948f1898ab565fc,PodSandboxId:6af75b2f4b1cdc4fddac2fc53200bd4bc81161be7df022fe2f37b6831035bf6e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761991740251984912,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 3c9f43c330cbd2ee80a698ee9579baec,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877956ec3f06ed232e4f3b24002a100db3b52c5d04bbdac7f73bc031d79d7458,PodSandboxId:5579fe961309082acb8f8271e1d22873a81d3ad15b76f11154982cadcc549444,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761991740187907105,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 991b90746afec243940c42caa25f71de,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=64b36711-c809-411b-b577-95dd7ff52669 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:10:47 pause-533709 crio[3025]: time="2025-11-01 10:10:47.579447060Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3b47a7a0-81be-4c3e-bd02-ee25550a3d1e name=/runtime.v1.RuntimeService/Version
	Nov 01 10:10:47 pause-533709 crio[3025]: time="2025-11-01 10:10:47.579521530Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3b47a7a0-81be-4c3e-bd02-ee25550a3d1e name=/runtime.v1.RuntimeService/Version
	Nov 01 10:10:47 pause-533709 crio[3025]: time="2025-11-01 10:10:47.580745473Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=db32b0d7-7d04-4693-97d7-eea6a980bead name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:10:47 pause-533709 crio[3025]: time="2025-11-01 10:10:47.581430151Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761991847581343866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=db32b0d7-7d04-4693-97d7-eea6a980bead name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:10:47 pause-533709 crio[3025]: time="2025-11-01 10:10:47.581894707Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d59dc02f-d707-4ba8-8bc3-babd115bdb84 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:10:47 pause-533709 crio[3025]: time="2025-11-01 10:10:47.581939910Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d59dc02f-d707-4ba8-8bc3-babd115bdb84 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:10:47 pause-533709 crio[3025]: time="2025-11-01 10:10:47.582208256Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:498354aef4312f07ed2c76ce63c5943b9749ab20856b79300060015652003383,PodSandboxId:448d1985ab76739bb42ffccbdf35736a33a142fe1b998d80620735bb7649be34,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761991828721437670,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pzwdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b3dc10c-d5ad-40f9-a28b-c4a89479f817,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8ffdea27f223ff335f1028f2c3f5349fd3a05ea5e4ca994148b67c06ef30019,PodSandboxId:a0e82a8a822cb3c6aee643fce25b92f1858fd3ddaf21afb6bbc30bad5c755ffe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761991823438420171,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f0943a93b3ceab023a59b1a3fb2aeb,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e204777a5b47ad5602b9943aa82ef3b3c9cbc9ffab40a8c53b196972ab1f8096,PodSandboxId:83d5711c535731e8d3191ea42d2a1c3caaa12b17b331b2b206c4eecabc89d3e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9
965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761991823297216243,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c9f43c330cbd2ee80a698ee9579baec,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d769cb43b90bbe32810e36eb46267c2143eb8836ca85a96afb4bf2f7172db304,PodSandboxId:0140522956c8ab14f515fbd76a0547b021cd5568bed001ba350245da916a0023,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d06195
38f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761991823267168173,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 991b90746afec243940c42caa25f71de,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98ff14e18b0891e76d4fd2fadbbac3a6e50f2d02759c03d0fb851ab167f8fbf3,PodSandboxId:b6692431ababa239bd5dd47ad4baff5026564b431363e83a09cd90bf0fed9363,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761991822954918309,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e724985d54b20b982f6f22f4e5940b63,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:429c6ef4a6c572cb4492e1a9dda379db7efab14e62f9fc850f89c70fc81bb4ba,PodSandboxId:ee4cd9b27fade
8ce46d92db9f4a50569a2b873fb044aa09a979db8f7eaeb5cf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761991800668520680,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c0c82af-9116-41ce-9b01-bb2802550969,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c7105be5e4c21dc0972f008c4fd1f88839a3bcfa0f60e4c6cf4063c49a283ef,PodSandboxId:a0e82a8a822cb3c6aee643fce25b92f1858fd3ddaf21af
b6bbc30bad5c755ffe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761991800647213719,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f0943a93b3ceab023a59b1a3fb2aeb,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b993b8fbb2d6a2d30b60ce
04571b393da5a12345208c74d4d9c42e72514262a7,PodSandboxId:b6692431ababa239bd5dd47ad4baff5026564b431363e83a09cd90bf0fed9363,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761991799913118772,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e724985d54b20b982f6f22f4e5940b63,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes
.pod.terminationGracePeriod: 30,},},&Container{Id:aa8db6bc66adcb7f5314b8afd3ae06e27b6df6b2f45271c09a78271c6e6aa221,PodSandboxId:bae233ae556260c4e88d2193e20561540e04294d35472b5f5d6a4cff2e0a6764,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761991753185539055,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pzwdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b3dc10c-d5ad-40f9-a28b-c4a89479f817,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"proto
col\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bbaa009a8d7c572ab9c6f67864a5b74d4937c9c0fdfb81ff3db36bd7b78f19e,PodSandboxId:6d8c483d789c5de14f76a3b12b920558e75cc4044bf2fc60ffb5c50b86e70116,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761991752206049321,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkdfj,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 1c0c82af-9116-41ce-9b01-bb2802550969,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e362762826b71a934dbb5eea442d975cc05597b31ae86c9e7948f1898ab565fc,PodSandboxId:6af75b2f4b1cdc4fddac2fc53200bd4bc81161be7df022fe2f37b6831035bf6e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761991740251984912,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 3c9f43c330cbd2ee80a698ee9579baec,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877956ec3f06ed232e4f3b24002a100db3b52c5d04bbdac7f73bc031d79d7458,PodSandboxId:5579fe961309082acb8f8271e1d22873a81d3ad15b76f11154982cadcc549444,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761991740187907105,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 991b90746afec243940c42caa25f71de,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d59dc02f-d707-4ba8-8bc3-babd115bdb84 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:10:47 pause-533709 crio[3025]: time="2025-11-01 10:10:47.637136020Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0ebdf91d-94ba-4354-8adf-4b93268fb750 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:10:47 pause-533709 crio[3025]: time="2025-11-01 10:10:47.637250032Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0ebdf91d-94ba-4354-8adf-4b93268fb750 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:10:47 pause-533709 crio[3025]: time="2025-11-01 10:10:47.639584662Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=24c4e40e-9bfe-46e8-95d5-6a6e921abbd2 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:10:47 pause-533709 crio[3025]: time="2025-11-01 10:10:47.640178113Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761991847640150807,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=24c4e40e-9bfe-46e8-95d5-6a6e921abbd2 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:10:47 pause-533709 crio[3025]: time="2025-11-01 10:10:47.640878628Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f962f3fb-0943-4304-b2ef-064d9c6ad132 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:10:47 pause-533709 crio[3025]: time="2025-11-01 10:10:47.640952402Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f962f3fb-0943-4304-b2ef-064d9c6ad132 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:10:47 pause-533709 crio[3025]: time="2025-11-01 10:10:47.641343439Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:498354aef4312f07ed2c76ce63c5943b9749ab20856b79300060015652003383,PodSandboxId:448d1985ab76739bb42ffccbdf35736a33a142fe1b998d80620735bb7649be34,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761991828721437670,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pzwdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b3dc10c-d5ad-40f9-a28b-c4a89479f817,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8ffdea27f223ff335f1028f2c3f5349fd3a05ea5e4ca994148b67c06ef30019,PodSandboxId:a0e82a8a822cb3c6aee643fce25b92f1858fd3ddaf21afb6bbc30bad5c755ffe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761991823438420171,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f0943a93b3ceab023a59b1a3fb2aeb,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e204777a5b47ad5602b9943aa82ef3b3c9cbc9ffab40a8c53b196972ab1f8096,PodSandboxId:83d5711c535731e8d3191ea42d2a1c3caaa12b17b331b2b206c4eecabc89d3e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9
965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761991823297216243,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c9f43c330cbd2ee80a698ee9579baec,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d769cb43b90bbe32810e36eb46267c2143eb8836ca85a96afb4bf2f7172db304,PodSandboxId:0140522956c8ab14f515fbd76a0547b021cd5568bed001ba350245da916a0023,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d06195
38f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761991823267168173,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 991b90746afec243940c42caa25f71de,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98ff14e18b0891e76d4fd2fadbbac3a6e50f2d02759c03d0fb851ab167f8fbf3,PodSandboxId:b6692431ababa239bd5dd47ad4baff5026564b431363e83a09cd90bf0fed9363,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761991822954918309,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e724985d54b20b982f6f22f4e5940b63,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:429c6ef4a6c572cb4492e1a9dda379db7efab14e62f9fc850f89c70fc81bb4ba,PodSandboxId:ee4cd9b27fade
8ce46d92db9f4a50569a2b873fb044aa09a979db8f7eaeb5cf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761991800668520680,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c0c82af-9116-41ce-9b01-bb2802550969,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c7105be5e4c21dc0972f008c4fd1f88839a3bcfa0f60e4c6cf4063c49a283ef,PodSandboxId:a0e82a8a822cb3c6aee643fce25b92f1858fd3ddaf21af
b6bbc30bad5c755ffe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761991800647213719,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f0943a93b3ceab023a59b1a3fb2aeb,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b993b8fbb2d6a2d30b60ce
04571b393da5a12345208c74d4d9c42e72514262a7,PodSandboxId:b6692431ababa239bd5dd47ad4baff5026564b431363e83a09cd90bf0fed9363,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761991799913118772,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e724985d54b20b982f6f22f4e5940b63,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes
.pod.terminationGracePeriod: 30,},},&Container{Id:aa8db6bc66adcb7f5314b8afd3ae06e27b6df6b2f45271c09a78271c6e6aa221,PodSandboxId:bae233ae556260c4e88d2193e20561540e04294d35472b5f5d6a4cff2e0a6764,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761991753185539055,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pzwdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b3dc10c-d5ad-40f9-a28b-c4a89479f817,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"proto
col\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bbaa009a8d7c572ab9c6f67864a5b74d4937c9c0fdfb81ff3db36bd7b78f19e,PodSandboxId:6d8c483d789c5de14f76a3b12b920558e75cc4044bf2fc60ffb5c50b86e70116,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761991752206049321,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkdfj,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 1c0c82af-9116-41ce-9b01-bb2802550969,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e362762826b71a934dbb5eea442d975cc05597b31ae86c9e7948f1898ab565fc,PodSandboxId:6af75b2f4b1cdc4fddac2fc53200bd4bc81161be7df022fe2f37b6831035bf6e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761991740251984912,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 3c9f43c330cbd2ee80a698ee9579baec,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877956ec3f06ed232e4f3b24002a100db3b52c5d04bbdac7f73bc031d79d7458,PodSandboxId:5579fe961309082acb8f8271e1d22873a81d3ad15b76f11154982cadcc549444,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761991740187907105,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 991b90746afec243940c42caa25f71de,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f962f3fb-0943-4304-b2ef-064d9c6ad132 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:10:47 pause-533709 crio[3025]: time="2025-11-01 10:10:47.700180423Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5a753ce1-6ea1-4dfa-b42b-c75d31ca9615 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:10:47 pause-533709 crio[3025]: time="2025-11-01 10:10:47.700352289Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5a753ce1-6ea1-4dfa-b42b-c75d31ca9615 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:10:47 pause-533709 crio[3025]: time="2025-11-01 10:10:47.703466427Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1e3a50e4-e135-4508-adb0-635718de2d01 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:10:47 pause-533709 crio[3025]: time="2025-11-01 10:10:47.704514236Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761991847704478115,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1e3a50e4-e135-4508-adb0-635718de2d01 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:10:47 pause-533709 crio[3025]: time="2025-11-01 10:10:47.705582761Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4cd8e475-10b9-43bb-87d2-70377adc5055 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:10:47 pause-533709 crio[3025]: time="2025-11-01 10:10:47.705658421Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4cd8e475-10b9-43bb-87d2-70377adc5055 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:10:47 pause-533709 crio[3025]: time="2025-11-01 10:10:47.705886459Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:498354aef4312f07ed2c76ce63c5943b9749ab20856b79300060015652003383,PodSandboxId:448d1985ab76739bb42ffccbdf35736a33a142fe1b998d80620735bb7649be34,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761991828721437670,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pzwdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b3dc10c-d5ad-40f9-a28b-c4a89479f817,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8ffdea27f223ff335f1028f2c3f5349fd3a05ea5e4ca994148b67c06ef30019,PodSandboxId:a0e82a8a822cb3c6aee643fce25b92f1858fd3ddaf21afb6bbc30bad5c755ffe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761991823438420171,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f0943a93b3ceab023a59b1a3fb2aeb,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e204777a5b47ad5602b9943aa82ef3b3c9cbc9ffab40a8c53b196972ab1f8096,PodSandboxId:83d5711c535731e8d3191ea42d2a1c3caaa12b17b331b2b206c4eecabc89d3e8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9
965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761991823297216243,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c9f43c330cbd2ee80a698ee9579baec,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d769cb43b90bbe32810e36eb46267c2143eb8836ca85a96afb4bf2f7172db304,PodSandboxId:0140522956c8ab14f515fbd76a0547b021cd5568bed001ba350245da916a0023,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d06195
38f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761991823267168173,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 991b90746afec243940c42caa25f71de,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98ff14e18b0891e76d4fd2fadbbac3a6e50f2d02759c03d0fb851ab167f8fbf3,PodSandboxId:b6692431ababa239bd5dd47ad4baff5026564b431363e83a09cd90bf0fed9363,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761991822954918309,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e724985d54b20b982f6f22f4e5940b63,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:429c6ef4a6c572cb4492e1a9dda379db7efab14e62f9fc850f89c70fc81bb4ba,PodSandboxId:ee4cd9b27fade
8ce46d92db9f4a50569a2b873fb044aa09a979db8f7eaeb5cf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761991800668520680,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkdfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c0c82af-9116-41ce-9b01-bb2802550969,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c7105be5e4c21dc0972f008c4fd1f88839a3bcfa0f60e4c6cf4063c49a283ef,PodSandboxId:a0e82a8a822cb3c6aee643fce25b92f1858fd3ddaf21af
b6bbc30bad5c755ffe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761991800647213719,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f0943a93b3ceab023a59b1a3fb2aeb,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b993b8fbb2d6a2d30b60ce
04571b393da5a12345208c74d4d9c42e72514262a7,PodSandboxId:b6692431ababa239bd5dd47ad4baff5026564b431363e83a09cd90bf0fed9363,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761991799913118772,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e724985d54b20b982f6f22f4e5940b63,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes
.pod.terminationGracePeriod: 30,},},&Container{Id:aa8db6bc66adcb7f5314b8afd3ae06e27b6df6b2f45271c09a78271c6e6aa221,PodSandboxId:bae233ae556260c4e88d2193e20561540e04294d35472b5f5d6a4cff2e0a6764,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761991753185539055,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-pzwdg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b3dc10c-d5ad-40f9-a28b-c4a89479f817,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"proto
col\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bbaa009a8d7c572ab9c6f67864a5b74d4937c9c0fdfb81ff3db36bd7b78f19e,PodSandboxId:6d8c483d789c5de14f76a3b12b920558e75cc4044bf2fc60ffb5c50b86e70116,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761991752206049321,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mkdfj,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 1c0c82af-9116-41ce-9b01-bb2802550969,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e362762826b71a934dbb5eea442d975cc05597b31ae86c9e7948f1898ab565fc,PodSandboxId:6af75b2f4b1cdc4fddac2fc53200bd4bc81161be7df022fe2f37b6831035bf6e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761991740251984912,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 3c9f43c330cbd2ee80a698ee9579baec,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877956ec3f06ed232e4f3b24002a100db3b52c5d04bbdac7f73bc031d79d7458,PodSandboxId:5579fe961309082acb8f8271e1d22873a81d3ad15b76f11154982cadcc549444,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761991740187907105,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-533709,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 991b90746afec243940c42caa25f71de,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4cd8e475-10b9-43bb-87d2-70377adc5055 name=/runtime.v1.RuntimeService/ListContainers
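Note: the repeated Version, ImageFsInfo, and ListContainers request/response pairs above are routine CRI polling of CRI-O (from the kubelet and any crictl invocations), logged at debug level through the otel-collector interceptor; the same container list comes back each time because nothing changed between polls. A quick way to issue the equivalent ListContainers call by hand, assuming the pause-533709 guest is still running and has crictl available:

  minikube -p pause-533709 ssh "sudo crictl ps -a"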
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	498354aef4312       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   19 seconds ago       Running             coredns                   1                   448d1985ab767       coredns-66bc5c9577-pzwdg
	b8ffdea27f223       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   24 seconds ago       Running             kube-apiserver            2                   a0e82a8a822cb       kube-apiserver-pause-533709
	e204777a5b47a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   24 seconds ago       Running             kube-scheduler            1                   83d5711c53573       kube-scheduler-pause-533709
	d769cb43b90bb       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   24 seconds ago       Running             kube-controller-manager   1                   0140522956c8a       kube-controller-manager-pause-533709
	98ff14e18b089       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   25 seconds ago       Running             etcd                      2                   b6692431ababa       etcd-pause-533709
	429c6ef4a6c57       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   47 seconds ago       Running             kube-proxy                1                   ee4cd9b27fade       kube-proxy-mkdfj
	1c7105be5e4c2       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   47 seconds ago       Exited              kube-apiserver            1                   a0e82a8a822cb       kube-apiserver-pause-533709
	b993b8fbb2d6a       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   48 seconds ago       Exited              etcd                      1                   b6692431ababa       etcd-pause-533709
	aa8db6bc66adc       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   0                   bae233ae55626       coredns-66bc5c9577-pzwdg
	8bbaa009a8d7c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   About a minute ago   Exited              kube-proxy                0                   6d8c483d789c5       kube-proxy-mkdfj
	e362762826b71       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   About a minute ago   Exited              kube-scheduler            0                   6af75b2f4b1cd       kube-scheduler-pause-533709
	877956ec3f06e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   About a minute ago   Exited              kube-controller-manager   0                   5579fe9613090       kube-controller-manager-pause-533709
	
	
	==> coredns [498354aef4312f07ed2c76ce63c5943b9749ab20856b79300060015652003383] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8b8641eae0af5337389aa76a78f71d2e2a7bd54cc199277be5abe199aebbfd3c9e156259680c91eb397a4c282437fd35af249d42857043b32bf3beb690ad2f54
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53740 - 29776 "HINFO IN 8738413579764429072.474708066066726926. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.04444825s
	
	
	==> coredns [aa8db6bc66adcb7f5314b8afd3ae06e27b6df6b2f45271c09a78271c6e6aa221] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
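Note: this is the earlier coredns container (restart count 0, now exited). The i/o timeouts against 10.96.0.1:443 (the default in-cluster kubernetes service IP) are consistent with the kube-apiserver restart visible in the container list above, after which coredns received SIGTERM and was replaced. If the profile is still available, the exited container's full log can be pulled with kubectl's --previous flag, assuming the kubeconfig context matches the profile name:

  kubectl --context pause-533709 -n kube-system logs coredns-66bc5c9577-pzwdg --previous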
	
	
	==> describe nodes <==
	Name:               pause-533709
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-533709
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=pause-533709
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_09_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:09:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-533709
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:10:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:10:27 +0000   Sat, 01 Nov 2025 10:09:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:10:27 +0000   Sat, 01 Nov 2025 10:09:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:10:27 +0000   Sat, 01 Nov 2025 10:09:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:10:27 +0000   Sat, 01 Nov 2025 10:09:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.122
	  Hostname:    pause-533709
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 3eb95cfe69ba40d3929f224e1e009616
	  System UUID:                3eb95cfe-69ba-40d3-929f-224e1e009616
	  Boot ID:                    eeb63cf3-e129-4c77-9267-60c4a9a96166
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-pzwdg                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     97s
	  kube-system                 etcd-pause-533709                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         102s
	  kube-system                 kube-apiserver-pause-533709             250m (12%)    0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-controller-manager-pause-533709    200m (10%)    0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-proxy-mkdfj                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-scheduler-pause-533709             100m (5%)     0 (0%)      0 (0%)           0 (0%)         103s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 95s                kube-proxy       
	  Normal  Starting                 21s                kube-proxy       
	  Normal  NodeHasSufficientPID     102s               kubelet          Node pause-533709 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  102s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  102s               kubelet          Node pause-533709 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s               kubelet          Node pause-533709 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 102s               kubelet          Starting kubelet.
	  Normal  NodeReady                101s               kubelet          Node pause-533709 status is now: NodeReady
	  Normal  RegisteredNode           98s                node-controller  Node pause-533709 event: Registered Node pause-533709 in Controller
	  Normal  Starting                 45s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  45s (x8 over 45s)  kubelet          Node pause-533709 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet          Node pause-533709 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     45s (x7 over 45s)  kubelet          Node pause-533709 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  45s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18s                node-controller  Node pause-533709 event: Registered Node pause-533709 in Controller
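Note: the two Starting/RegisteredNode pairs (one roughly 102s ago, one roughly 45s and 18s ago) show the kubelet and node controller re-registering the node partway through the run, matching the second set of container attempts listed earlier. The node's event stream can be re-queried in time order, again assuming the kubeconfig context matches the profile name:

  kubectl --context pause-533709 get events --field-selector involvedObject.name=pause-533709 --sort-by=.lastTimestamp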
	
	
	==> dmesg <==
	[Nov 1 10:08] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000029] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000056] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.008164] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.177735] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.093715] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.113484] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.095666] kauditd_printk_skb: 18 callbacks suppressed
	[Nov 1 10:09] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.052905] kauditd_printk_skb: 18 callbacks suppressed
	[ +11.260167] kauditd_printk_skb: 213 callbacks suppressed
	[ +23.031092] kauditd_printk_skb: 38 callbacks suppressed
	[Nov 1 10:10] kauditd_printk_skb: 326 callbacks suppressed
	[ +19.530644] kauditd_printk_skb: 17 callbacks suppressed
	[  +4.537286] kauditd_printk_skb: 81 callbacks suppressed
	
	
	==> etcd [98ff14e18b0891e76d4fd2fadbbac3a6e50f2d02759c03d0fb851ab167f8fbf3] <==
	{"level":"warn","ts":"2025-11-01T10:10:25.881756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:25.905416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:25.925762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:25.946417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:25.978758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:25.993039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:26.037507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:26.049044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:26.067014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:26.084904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:26.100583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:26.127807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:26.148187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:26.163042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:26.181392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:26.199335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:26.218031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:26.231780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:26.269912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:26.287133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:26.313253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:26.490697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35984","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T10:10:47.858262Z","caller":"traceutil/trace.go:172","msg":"trace[728674275] transaction","detail":"{read_only:false; response_revision:482; number_of_response:1; }","duration":"172.046955ms","start":"2025-11-01T10:10:47.686195Z","end":"2025-11-01T10:10:47.858242Z","steps":["trace[728674275] 'process raft request'  (duration: 171.817372ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:10:48.005171Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.090448ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16416916136602357648 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/pause-533709\" mod_revision:475 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pause-533709\" value_size:484 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pause-533709\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T10:10:48.005602Z","caller":"traceutil/trace.go:172","msg":"trace[2123541239] transaction","detail":"{read_only:false; response_revision:483; number_of_response:1; }","duration":"233.311065ms","start":"2025-11-01T10:10:47.772274Z","end":"2025-11-01T10:10:48.005585Z","steps":["trace[2123541239] 'process raft request'  (duration: 111.663994ms)","trace[2123541239] 'compare'  (duration: 118.982559ms)"],"step_count":2}
	
	
	==> etcd [b993b8fbb2d6a2d30b60ce04571b393da5a12345208c74d4d9c42e72514262a7] <==
	{"level":"info","ts":"2025-11-01T10:10:00.119705Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"8cd2c58e2f5d4822","local-member-id":"4bca7de7de23e3d4","recovered-remote-peer-id":"4bca7de7de23e3d4","recovered-remote-peer-urls":["https://192.168.61.122:2380"],"recovered-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-11-01T10:10:00.119718Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	{"level":"info","ts":"2025-11-01T10:10:00.119727Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	{"level":"info","ts":"2025-11-01T10:10:00.119758Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	{"level":"info","ts":"2025-11-01T10:10:00.119820Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"4bca7de7de23e3d4 switched to configuration voters=()"}
	{"level":"info","ts":"2025-11-01T10:10:00.119866Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"4bca7de7de23e3d4 became follower at term 2"}
	{"level":"info","ts":"2025-11-01T10:10:00.119874Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft 4bca7de7de23e3d4 [peers: [], term: 2, commit: 426, applied: 0, lastindex: 426, lastterm: 2]"}
	{"level":"warn","ts":"2025-11-01T10:10:00.123703Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2025-11-01T10:10:00.130248Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":407}
	{"level":"info","ts":"2025-11-01T10:10:00.137861Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2025-11-01T10:10:00.138732Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"4bca7de7de23e3d4","timeout":"7s"}
	{"level":"info","ts":"2025-11-01T10:10:00.139248Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"4bca7de7de23e3d4"}
	{"level":"info","ts":"2025-11-01T10:10:00.139335Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"4bca7de7de23e3d4","local-server-version":"3.6.4","cluster-id":"8cd2c58e2f5d4822","cluster-version":"3.6"}
	{"level":"info","ts":"2025-11-01T10:10:00.139767Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"4bca7de7de23e3d4 switched to configuration voters=(5461315932957959124)"}
	{"level":"info","ts":"2025-11-01T10:10:00.139880Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-01T10:10:00.139901Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"8cd2c58e2f5d4822","local-member-id":"4bca7de7de23e3d4","added-peer-id":"4bca7de7de23e3d4","added-peer-peer-urls":["https://192.168.61.122:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-11-01T10:10:00.140121Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"8cd2c58e2f5d4822","local-member-id":"4bca7de7de23e3d4","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-11-01T10:10:00.140583Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"4bca7de7de23e3d4","initial-advertise-peer-urls":["https://192.168.61.122:2380"],"listen-peer-urls":["https://192.168.61.122:2380"],"advertise-client-urls":["https://192.168.61.122:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.122:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-01T10:10:00.140670Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-01T10:10:00.140795Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"4bca7de7de23e3d4","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-11-01T10:10:00.140978Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T10:10:00.141017Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T10:10:00.141032Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T10:10:00.143472Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.61.122:2380"}
	{"level":"info","ts":"2025-11-01T10:10:00.143993Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.61.122:2380"}
	
	
	==> kernel <==
	 10:10:49 up 2 min,  0 users,  load average: 1.28, 0.54, 0.20
	Linux pause-533709 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [1c7105be5e4c21dc0972f008c4fd1f88839a3bcfa0f60e4c6cf4063c49a283ef] <==
	W1101 10:10:01.743515       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 10:10:01.744193       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1101 10:10:01.745587       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1101 10:10:01.770932       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1101 10:10:01.754891       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 10:10:01.781198       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1101 10:10:01.781797       1 instance.go:239] Using reconciler: lease
	W1101 10:10:01.783746       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:10:01.784535       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:10:02.743904       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:10:02.744878       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:10:02.785715       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:10:04.089978       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:10:04.228884       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:10:04.385871       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:10:06.780167       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:10:06.942545       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:10:07.126648       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:10:10.506462       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:10:10.793716       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:10:10.969190       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:10:16.488761       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:10:16.584396       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 10:10:18.570186       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F1101 10:10:21.784380       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [b8ffdea27f223ff335f1028f2c3f5349fd3a05ea5e4ca994148b67c06ef30019] <==
	I1101 10:10:27.267443       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 10:10:27.268285       1 aggregator.go:171] initial CRD sync complete...
	I1101 10:10:27.268315       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 10:10:27.268321       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:10:27.268327       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:10:27.269386       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 10:10:27.269479       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:10:27.314151       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 10:10:27.314249       1 policy_source.go:240] refreshing policies
	I1101 10:10:27.329001       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 10:10:27.329996       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 10:10:27.330148       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 10:10:27.334261       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 10:10:27.335963       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 10:10:27.337353       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:10:27.345140       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1101 10:10:27.353413       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:10:28.136859       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:10:28.140416       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:10:29.113280       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:10:29.172340       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 10:10:29.220792       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:10:29.232530       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:10:30.854541       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:10:30.905167       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [877956ec3f06ed232e4f3b24002a100db3b52c5d04bbdac7f73bc031d79d7458] <==
	I1101 10:09:10.321122       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:09:10.321489       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 10:09:10.321503       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:09:10.322388       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:09:10.324228       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 10:09:10.324938       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 10:09:10.325053       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:09:10.326679       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 10:09:10.326799       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 10:09:10.326923       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 10:09:10.327014       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 10:09:10.327022       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 10:09:10.328003       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 10:09:10.332468       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:09:10.340672       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:09:10.358741       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-533709" podCIDRs=["10.244.0.0/24"]
	I1101 10:09:10.362498       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:09:10.362670       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:09:10.362706       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:09:10.368706       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:09:10.369959       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 10:09:10.374896       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 10:09:10.381918       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:09:10.388161       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:09:10.388284       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [d769cb43b90bbe32810e36eb46267c2143eb8836ca85a96afb4bf2f7172db304] <==
	I1101 10:10:30.609541       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 10:10:30.609754       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:10:30.613171       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 10:10:30.614043       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 10:10:30.615382       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 10:10:30.622129       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 10:10:30.630816       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 10:10:30.633336       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 10:10:30.633381       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:10:30.636544       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 10:10:30.642837       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 10:10:30.646271       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 10:10:30.648779       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 10:10:30.650915       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:10:30.651521       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:10:30.651552       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:10:30.652484       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:10:30.652495       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:10:30.652500       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:10:30.652568       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 10:10:30.654158       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:10:30.659877       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:10:30.661893       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 10:10:30.667566       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:10:30.670147       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	
	
	==> kube-proxy [429c6ef4a6c572cb4492e1a9dda379db7efab14e62f9fc850f89c70fc81bb4ba] <==
	E1101 10:10:22.795457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-533709&limit=500&resourceVersion=0\": dial tcp 192.168.61.122:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.61.122:53430->192.168.61.122:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1101 10:10:27.342226       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:10:27.342326       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.61.122"]
	E1101 10:10:27.342470       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:10:27.401356       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 10:10:27.401428       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 10:10:27.401459       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:10:27.415998       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:10:27.418894       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:10:27.419490       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:10:27.436277       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:10:27.436498       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:10:27.436664       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:10:27.436689       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:10:27.440106       1 config.go:200] "Starting service config controller"
	I1101 10:10:27.440777       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:10:27.440662       1 config.go:309] "Starting node config controller"
	I1101 10:10:27.441009       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:10:27.441036       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:10:27.536824       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:10:27.536890       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 10:10:27.541223       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [8bbaa009a8d7c572ab9c6f67864a5b74d4937c9c0fdfb81ff3db36bd7b78f19e] <==
	I1101 10:09:13.085888       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:09:13.187960       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:09:13.188011       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.61.122"]
	E1101 10:09:13.188167       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:09:13.397807       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 10:09:13.397959       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 10:09:13.398036       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:09:13.415213       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:09:13.416168       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:09:13.416220       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:09:13.423300       1 config.go:200] "Starting service config controller"
	I1101 10:09:13.423455       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:09:13.423560       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:09:13.423579       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:09:13.423677       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:09:13.423695       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:09:13.425175       1 config.go:309] "Starting node config controller"
	I1101 10:09:13.425333       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:09:13.425364       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:09:13.524490       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:09:13.524528       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:09:13.524591       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e204777a5b47ad5602b9943aa82ef3b3c9cbc9ffab40a8c53b196972ab1f8096] <==
	I1101 10:10:25.066799       1 serving.go:386] Generated self-signed cert in-memory
	I1101 10:10:27.301268       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:10:27.301571       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:10:27.308661       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:10:27.308823       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 10:10:27.308866       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 10:10:27.308901       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:10:27.310149       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:10:27.310179       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:10:27.310193       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:10:27.310198       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:10:27.410305       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:10:27.410719       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 10:10:27.410736       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [e362762826b71a934dbb5eea442d975cc05597b31ae86c9e7948f1898ab565fc] <==
	E1101 10:09:03.352345       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:09:03.352382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:09:03.352416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:09:03.352448       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:09:03.352482       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:09:03.354729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:09:03.354783       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:09:04.199027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:09:04.334482       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:09:04.564213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:09:04.587016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:09:04.626562       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:09:04.632916       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:09:04.704204       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1101 10:09:04.748893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 10:09:04.754046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:09:04.754642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 10:09:04.778263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1101 10:09:06.529846       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:09:50.056501       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1101 10:09:50.056571       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1101 10:09:50.056614       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1101 10:09:50.056656       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:09:50.056838       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1101 10:09:50.056885       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 01 10:10:25 pause-533709 kubelet[3593]: E1101 10:10:25.460843    3593 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-533709\" not found" node="pause-533709"
	Nov 01 10:10:25 pause-533709 kubelet[3593]: E1101 10:10:25.461297    3593 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-533709\" not found" node="pause-533709"
	Nov 01 10:10:25 pause-533709 kubelet[3593]: E1101 10:10:25.461700    3593 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-533709\" not found" node="pause-533709"
	Nov 01 10:10:26 pause-533709 kubelet[3593]: E1101 10:10:26.467548    3593 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-533709\" not found" node="pause-533709"
	Nov 01 10:10:26 pause-533709 kubelet[3593]: E1101 10:10:26.467853    3593 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-533709\" not found" node="pause-533709"
	Nov 01 10:10:27 pause-533709 kubelet[3593]: I1101 10:10:27.183162    3593 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-533709"
	Nov 01 10:10:27 pause-533709 kubelet[3593]: E1101 10:10:27.411904    3593 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-533709\" already exists" pod="kube-system/etcd-pause-533709"
	Nov 01 10:10:27 pause-533709 kubelet[3593]: I1101 10:10:27.412194    3593 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-533709"
	Nov 01 10:10:27 pause-533709 kubelet[3593]: I1101 10:10:27.415638    3593 kubelet_node_status.go:124] "Node was previously registered" node="pause-533709"
	Nov 01 10:10:27 pause-533709 kubelet[3593]: I1101 10:10:27.415781    3593 kubelet_node_status.go:78] "Successfully registered node" node="pause-533709"
	Nov 01 10:10:27 pause-533709 kubelet[3593]: I1101 10:10:27.415984    3593 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 10:10:27 pause-533709 kubelet[3593]: I1101 10:10:27.417840    3593 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 10:10:27 pause-533709 kubelet[3593]: E1101 10:10:27.433187    3593 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-533709\" already exists" pod="kube-system/kube-apiserver-pause-533709"
	Nov 01 10:10:27 pause-533709 kubelet[3593]: I1101 10:10:27.433784    3593 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-533709"
	Nov 01 10:10:27 pause-533709 kubelet[3593]: E1101 10:10:27.450622    3593 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-533709\" already exists" pod="kube-system/kube-controller-manager-pause-533709"
	Nov 01 10:10:27 pause-533709 kubelet[3593]: I1101 10:10:27.450829    3593 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-533709"
	Nov 01 10:10:27 pause-533709 kubelet[3593]: E1101 10:10:27.460577    3593 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-533709\" already exists" pod="kube-system/kube-scheduler-pause-533709"
	Nov 01 10:10:28 pause-533709 kubelet[3593]: I1101 10:10:28.071466    3593 apiserver.go:52] "Watching apiserver"
	Nov 01 10:10:28 pause-533709 kubelet[3593]: I1101 10:10:28.082360    3593 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 01 10:10:28 pause-533709 kubelet[3593]: I1101 10:10:28.128949    3593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c0c82af-9116-41ce-9b01-bb2802550969-lib-modules\") pod \"kube-proxy-mkdfj\" (UID: \"1c0c82af-9116-41ce-9b01-bb2802550969\") " pod="kube-system/kube-proxy-mkdfj"
	Nov 01 10:10:28 pause-533709 kubelet[3593]: I1101 10:10:28.130581    3593 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c0c82af-9116-41ce-9b01-bb2802550969-xtables-lock\") pod \"kube-proxy-mkdfj\" (UID: \"1c0c82af-9116-41ce-9b01-bb2802550969\") " pod="kube-system/kube-proxy-mkdfj"
	Nov 01 10:10:33 pause-533709 kubelet[3593]: E1101 10:10:33.303033    3593 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761991833302600944  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 01 10:10:33 pause-533709 kubelet[3593]: E1101 10:10:33.303153    3593 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761991833302600944  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 01 10:10:43 pause-533709 kubelet[3593]: E1101 10:10:43.304776    3593 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761991843304523438  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 01 10:10:43 pause-533709 kubelet[3593]: E1101 10:10:43.304797    3593 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761991843304523438  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-533709 -n pause-533709
helpers_test.go:269: (dbg) Run:  kubectl --context pause-533709 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (61.74s)

                                                
                                    

Test pass (289/343)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 7.06
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.17
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 4.46
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.17
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.65
22 TestOffline 92.47
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 416.33
31 TestAddons/serial/GCPAuth/Namespaces 0.15
32 TestAddons/serial/GCPAuth/FakeCredentials 28.55
36 TestAddons/parallel/RegistryCreds 0.67
38 TestAddons/parallel/InspektorGadget 5.3
39 TestAddons/parallel/MetricsServer 6.83
42 TestAddons/parallel/Headlamp 18.87
43 TestAddons/parallel/CloudSpanner 6.58
45 TestAddons/parallel/NvidiaDevicePlugin 6
48 TestAddons/StoppedEnableDisable 83.82
49 TestCertOptions 78.54
50 TestCertExpiration 306.01
52 TestForceSystemdFlag 84.97
53 TestForceSystemdEnv 61.28
58 TestErrorSpam/setup 39.02
59 TestErrorSpam/start 0.35
60 TestErrorSpam/status 0.66
61 TestErrorSpam/pause 1.59
62 TestErrorSpam/unpause 1.85
63 TestErrorSpam/stop 89.14
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 82.01
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 41.56
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.49
75 TestFunctional/serial/CacheCmd/cache/add_local 1.13
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
77 TestFunctional/serial/CacheCmd/cache/list 0.07
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.19
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.63
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 39.96
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.46
86 TestFunctional/serial/LogsFileCmd 1.48
87 TestFunctional/serial/InvalidService 4.05
89 TestFunctional/parallel/ConfigCmd 0.44
91 TestFunctional/parallel/DryRun 0.23
92 TestFunctional/parallel/InternationalLanguage 0.12
93 TestFunctional/parallel/StatusCmd 0.68
97 TestFunctional/parallel/ServiceCmdConnect 9.44
98 TestFunctional/parallel/AddonsCmd 0.17
101 TestFunctional/parallel/SSHCmd 0.31
102 TestFunctional/parallel/CpCmd 1.02
104 TestFunctional/parallel/FileSync 0.19
105 TestFunctional/parallel/CertSync 1.14
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.33
113 TestFunctional/parallel/License 0.27
114 TestFunctional/parallel/UpdateContextCmd/no_changes 0.07
115 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
116 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
117 TestFunctional/parallel/MountCmd/any-port 37.14
119 TestFunctional/parallel/ProfileCmd/profile_not_create 0.32
120 TestFunctional/parallel/ProfileCmd/profile_list 0.31
121 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
122 TestFunctional/parallel/MountCmd/specific-port 1.43
123 TestFunctional/parallel/MountCmd/VerifyCleanup 1.23
124 TestFunctional/parallel/Version/short 0.07
125 TestFunctional/parallel/Version/components 0.48
135 TestFunctional/parallel/ImageCommands/ImageListShort 0.2
136 TestFunctional/parallel/ImageCommands/ImageListTable 0.19
137 TestFunctional/parallel/ImageCommands/ImageListJson 0.19
138 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
139 TestFunctional/parallel/ImageCommands/ImageBuild 2.89
140 TestFunctional/parallel/ImageCommands/Setup 0.41
141 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.21
142 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.87
143 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1
144 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.5
145 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
146 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.66
147 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.55
148 TestFunctional/parallel/ServiceCmd/List 1.21
149 TestFunctional/parallel/ServiceCmd/JSONOutput 1.21
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 213.98
161 TestMultiControlPlane/serial/DeployApp 6.39
162 TestMultiControlPlane/serial/PingHostFromPods 1.38
163 TestMultiControlPlane/serial/AddWorkerNode 44.88
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.72
166 TestMultiControlPlane/serial/CopyFile 11.08
167 TestMultiControlPlane/serial/StopSecondaryNode 86.98
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.54
169 TestMultiControlPlane/serial/RestartSecondaryNode 42.83
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.88
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 380.2
172 TestMultiControlPlane/serial/DeleteSecondaryNode 18.6
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.52
174 TestMultiControlPlane/serial/StopCluster 255.18
175 TestMultiControlPlane/serial/RestartCluster 98.46
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.53
177 TestMultiControlPlane/serial/AddSecondaryNode 77.36
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.68
183 TestJSONOutput/start/Command 79.11
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.76
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.68
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7.49
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.25
211 TestMainNoArgs 0.06
212 TestMinikubeProfile 83.25
215 TestMountStart/serial/StartWithMountFirst 21.02
216 TestMountStart/serial/VerifyMountFirst 0.32
217 TestMountStart/serial/StartWithMountSecond 23.74
218 TestMountStart/serial/VerifyMountSecond 0.3
219 TestMountStart/serial/DeleteFirst 0.69
220 TestMountStart/serial/VerifyMountPostDelete 0.31
221 TestMountStart/serial/Stop 1.38
222 TestMountStart/serial/RestartStopped 20.89
223 TestMountStart/serial/VerifyMountPostStop 0.31
226 TestMultiNode/serial/FreshStart2Nodes 99.9
227 TestMultiNode/serial/DeployApp2Nodes 5.54
228 TestMultiNode/serial/PingHostFrom2Pods 0.91
229 TestMultiNode/serial/AddNode 42.77
230 TestMultiNode/serial/MultiNodeLabels 0.07
231 TestMultiNode/serial/ProfileList 0.46
232 TestMultiNode/serial/CopyFile 6.1
233 TestMultiNode/serial/StopNode 2.24
234 TestMultiNode/serial/StartAfterStop 40.57
235 TestMultiNode/serial/RestartKeepsNodes 312.97
236 TestMultiNode/serial/DeleteNode 2.67
237 TestMultiNode/serial/StopMultiNode 178.21
238 TestMultiNode/serial/RestartMultiNode 96.93
239 TestMultiNode/serial/ValidateNameConflict 42.26
246 TestScheduledStopUnix 110.98
250 TestRunningBinaryUpgrade 166.58
252 TestKubernetesUpgrade 273.89
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
256 TestNoKubernetes/serial/StartWithK8s 85.8
257 TestNoKubernetes/serial/StartWithStopK8s 30.04
265 TestStoppedBinaryUpgrade/Setup 0.59
266 TestStoppedBinaryUpgrade/Upgrade 109.53
267 TestNoKubernetes/serial/Start 48.2
268 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
269 TestNoKubernetes/serial/ProfileList 1.16
270 TestNoKubernetes/serial/Stop 1.48
271 TestNoKubernetes/serial/StartNoArgs 59.85
272 TestStoppedBinaryUpgrade/MinikubeLogs 1.01
280 TestNetworkPlugins/group/false 5.73
284 TestISOImage/Setup 29.64
285 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
287 TestPause/serial/Start 101.5
289 TestISOImage/Binaries/crictl 0.2
290 TestISOImage/Binaries/curl 0.19
291 TestISOImage/Binaries/docker 0.21
292 TestISOImage/Binaries/git 0.23
293 TestISOImage/Binaries/iptables 0.22
294 TestISOImage/Binaries/podman 0.22
295 TestISOImage/Binaries/rsync 0.2
296 TestISOImage/Binaries/socat 0.22
297 TestISOImage/Binaries/wget 0.2
298 TestISOImage/Binaries/VBoxControl 0.21
299 TestISOImage/Binaries/VBoxService 0.22
302 TestStartStop/group/old-k8s-version/serial/FirstStart 90.01
304 TestStartStop/group/embed-certs/serial/FirstStart 64.85
306 TestStartStop/group/no-preload/serial/FirstStart 74.66
307 TestStartStop/group/embed-certs/serial/DeployApp 9.34
308 TestStartStop/group/old-k8s-version/serial/DeployApp 10.35
309 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.15
310 TestStartStop/group/embed-certs/serial/Stop 75.34
311 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.15
312 TestStartStop/group/old-k8s-version/serial/Stop 88.23
313 TestStartStop/group/no-preload/serial/DeployApp 11.3
314 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.06
315 TestStartStop/group/no-preload/serial/Stop 86.76
316 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.15
317 TestStartStop/group/embed-certs/serial/SecondStart 50.65
318 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
319 TestStartStop/group/old-k8s-version/serial/SecondStart 87.97
321 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 100.58
322 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 11.01
323 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
324 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
325 TestStartStop/group/no-preload/serial/SecondStart 63.74
326 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
327 TestStartStop/group/embed-certs/serial/Pause 3.35
329 TestStartStop/group/newest-cni/serial/FirstStart 63.02
330 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 10.01
331 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
332 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
333 TestStartStop/group/old-k8s-version/serial/Pause 3.46
334 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 11.01
335 TestNetworkPlugins/group/auto/Start 52.96
336 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.34
337 TestStartStop/group/newest-cni/serial/DeployApp 0
338 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.01
339 TestStartStop/group/newest-cni/serial/Stop 10.35
340 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
341 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.11
342 TestStartStop/group/default-k8s-diff-port/serial/Stop 86.75
343 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
344 TestStartStop/group/no-preload/serial/Pause 2.82
345 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.16
346 TestStartStop/group/newest-cni/serial/SecondStart 41.89
347 TestNetworkPlugins/group/flannel/Start 91.03
348 TestNetworkPlugins/group/auto/KubeletFlags 0.22
349 TestNetworkPlugins/group/auto/NetCatPod 10.29
350 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
352 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.35
353 TestStartStop/group/newest-cni/serial/Pause 3.99
354 TestNetworkPlugins/group/auto/DNS 0.23
355 TestNetworkPlugins/group/auto/Localhost 0.18
356 TestNetworkPlugins/group/auto/HairPin 0.18
357 TestNetworkPlugins/group/enable-default-cni/Start 85.78
358 TestNetworkPlugins/group/bridge/Start 104.28
359 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
360 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 59.94
361 TestNetworkPlugins/group/flannel/ControllerPod 6.01
362 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
363 TestNetworkPlugins/group/flannel/NetCatPod 13.35
364 TestNetworkPlugins/group/flannel/DNS 0.2
365 TestNetworkPlugins/group/flannel/Localhost 0.16
366 TestNetworkPlugins/group/flannel/HairPin 0.19
367 TestNetworkPlugins/group/calico/Start 73.29
368 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
369 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.29
370 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 11.01
371 TestNetworkPlugins/group/enable-default-cni/DNS 0.28
372 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
373 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
374 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
375 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.38
376 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.82
377 TestNetworkPlugins/group/kindnet/Start 64.46
378 TestNetworkPlugins/group/custom-flannel/Start 96.74
379 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
380 TestNetworkPlugins/group/bridge/NetCatPod 12.32
381 TestNetworkPlugins/group/bridge/DNS 0.18
382 TestNetworkPlugins/group/bridge/Localhost 0.18
383 TestNetworkPlugins/group/bridge/HairPin 0.17
385 TestISOImage/PersistentMounts//data 0.19
386 TestISOImage/PersistentMounts//var/lib/docker 0.2
387 TestISOImage/PersistentMounts//var/lib/cni 0.21
388 TestISOImage/PersistentMounts//var/lib/kubelet 0.18
389 TestISOImage/PersistentMounts//var/lib/minikube 0.2
390 TestISOImage/PersistentMounts//var/lib/toolbox 0.22
391 TestISOImage/PersistentMounts//var/lib/boot2docker 0.21
392 TestISOImage/eBPFSupport 0.19
393 TestNetworkPlugins/group/calico/ControllerPod 6.01
394 TestNetworkPlugins/group/calico/KubeletFlags 0.19
395 TestNetworkPlugins/group/calico/NetCatPod 30.25
396 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
397 TestNetworkPlugins/group/kindnet/KubeletFlags 0.18
398 TestNetworkPlugins/group/kindnet/NetCatPod 11.25
399 TestNetworkPlugins/group/calico/DNS 0.17
400 TestNetworkPlugins/group/calico/Localhost 0.14
401 TestNetworkPlugins/group/calico/HairPin 0.15
402 TestNetworkPlugins/group/kindnet/DNS 0.15
403 TestNetworkPlugins/group/kindnet/Localhost 0.13
404 TestNetworkPlugins/group/kindnet/HairPin 0.14
405 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.18
406 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.27
407 TestNetworkPlugins/group/custom-flannel/DNS 0.16
408 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
409 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
x
+
TestDownloadOnly/v1.28.0/json-events (7.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-147882 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-147882 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (7.062546735s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1101 08:44:32.078235  534515 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1101 08:44:32.078353  534515 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-530629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-147882
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-147882: exit status 85 (79.857502ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-147882 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-147882 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 08:44:25
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 08:44:25.074756  534527 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:44:25.075061  534527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:44:25.075070  534527 out.go:374] Setting ErrFile to fd 2...
	I1101 08:44:25.075075  534527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:44:25.075325  534527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-530629/.minikube/bin
	W1101 08:44:25.075448  534527 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21833-530629/.minikube/config/config.json: open /home/jenkins/minikube-integration/21833-530629/.minikube/config/config.json: no such file or directory
	I1101 08:44:25.075974  534527 out.go:368] Setting JSON to true
	I1101 08:44:25.076977  534527 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":62787,"bootTime":1761923878,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 08:44:25.077069  534527 start.go:143] virtualization: kvm guest
	I1101 08:44:25.079316  534527 out.go:99] [download-only-147882] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1101 08:44:25.079471  534527 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21833-530629/.minikube/cache/preloaded-tarball: no such file or directory
	I1101 08:44:25.079515  534527 notify.go:221] Checking for updates...
	I1101 08:44:25.080785  534527 out.go:171] MINIKUBE_LOCATION=21833
	I1101 08:44:25.082342  534527 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 08:44:25.083737  534527 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21833-530629/kubeconfig
	I1101 08:44:25.085016  534527 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-530629/.minikube
	I1101 08:44:25.086316  534527 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1101 08:44:25.088643  534527 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1101 08:44:25.088949  534527 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 08:44:25.124524  534527 out.go:99] Using the kvm2 driver based on user configuration
	I1101 08:44:25.124563  534527 start.go:309] selected driver: kvm2
	I1101 08:44:25.124572  534527 start.go:930] validating driver "kvm2" against <nil>
	I1101 08:44:25.124943  534527 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 08:44:25.125492  534527 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1101 08:44:25.125684  534527 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 08:44:25.125735  534527 cni.go:84] Creating CNI manager for ""
	I1101 08:44:25.125803  534527 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 08:44:25.125815  534527 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1101 08:44:25.125878  534527 start.go:353] cluster config:
	{Name:download-only-147882 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-147882 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 08:44:25.126140  534527 iso.go:125] acquiring lock: {Name:mk4a0ae0d13e232f8e381ad8e5059e42b27a0733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 08:44:25.127824  534527 out.go:99] Downloading VM boot image ...
	I1101 08:44:25.127871  534527 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21833-530629/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso
	I1101 08:44:28.483933  534527 out.go:99] Starting "download-only-147882" primary control-plane node in "download-only-147882" cluster
	I1101 08:44:28.483967  534527 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 08:44:28.506870  534527 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1101 08:44:28.506934  534527 cache.go:59] Caching tarball of preloaded images
	I1101 08:44:28.507145  534527 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 08:44:28.509034  534527 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1101 08:44:28.509068  534527 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1101 08:44:28.530064  534527 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1101 08:44:28.530205  534527 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21833-530629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-147882 host does not exist
	  To start a cluster, run: "minikube start -p download-only-147882"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
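Note: the exit status 85 above is expected. After a --download-only run the profile has no running host, so "minikube logs" has nothing to collect and points back to "minikube start". A minimal local reproduction sketch, assuming a minikube binary on PATH (profile name taken from the test):

	# Download the v1.28.0 ISO and preload without creating a VM
	minikube start -o=json --download-only -p download-only-147882 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2
	# With no host created, logs exits with status 85
	minikube logs -p download-only-147882
	# Clean up the download-only profile
	minikube delete -p download-only-147882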

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.17s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-147882
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (4.46s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-664461 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-664461 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.45927463s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.46s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1101 08:44:36.937371  534515 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1101 08:44:36.937407  534515 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-530629/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-664461
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-664461: exit status 85 (79.039433ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-147882 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-147882 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:44 UTC │
	│ delete  │ -p download-only-147882                                                                                                                                                 │ download-only-147882 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │ 01 Nov 25 08:44 UTC │
	│ start   │ -o=json --download-only -p download-only-664461 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-664461 │ jenkins │ v1.37.0 │ 01 Nov 25 08:44 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 08:44:32
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 08:44:32.534568  534703 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:44:32.534817  534703 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:44:32.534825  534703 out.go:374] Setting ErrFile to fd 2...
	I1101 08:44:32.534829  534703 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:44:32.535031  534703 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-530629/.minikube/bin
	I1101 08:44:32.535512  534703 out.go:368] Setting JSON to true
	I1101 08:44:32.536391  534703 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":62795,"bootTime":1761923878,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 08:44:32.536489  534703 start.go:143] virtualization: kvm guest
	I1101 08:44:32.538358  534703 out.go:99] [download-only-664461] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 08:44:32.538501  534703 notify.go:221] Checking for updates...
	I1101 08:44:32.539790  534703 out.go:171] MINIKUBE_LOCATION=21833
	I1101 08:44:32.541384  534703 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 08:44:32.542801  534703 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21833-530629/kubeconfig
	I1101 08:44:32.544053  534703 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-530629/.minikube
	I1101 08:44:32.545151  534703 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-664461 host does not exist
	  To start a cluster, run: "minikube start -p download-only-664461"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.17s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-664461
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestBinaryMirror (0.65s)

                                                
                                                
=== RUN   TestBinaryMirror
I1101 08:44:37.630338  534515 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-775538 --alsologtostderr --binary-mirror http://127.0.0.1:36997 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-775538" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-775538
--- PASS: TestBinaryMirror (0.65s)
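For reference, this test exercises the --binary-mirror flag, which points minikube at an alternate URL for the kubectl/kubelet/kubeadm downloads instead of dl.k8s.io. A minimal sketch, assuming a mirror is already serving the binaries at the address the test used:

	# Download-only start that pulls Kubernetes binaries from the local mirror
	minikube start --download-only -p binary-mirror-775538 --alsologtostderr --binary-mirror http://127.0.0.1:36997 --driver=kvm2 --container-runtime=crio
	minikube delete -p binary-mirror-775538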

                                                
                                    
x
+
TestOffline (92.47s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-209356 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-209356 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m31.013864874s)
helpers_test.go:175: Cleaning up "offline-crio-209356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-209356
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-209356: (1.455668366s)
--- PASS: TestOffline (92.47s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-994396
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-994396: exit status 85 (75.317735ms)

                                                
                                                
-- stdout --
	* Profile "addons-994396" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-994396"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-994396
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-994396: exit status 85 (75.888802ms)

                                                
                                                
-- stdout --
	* Profile "addons-994396" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-994396"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/Setup (416.33s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-994396 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-994396 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (6m56.334241113s)
--- PASS: TestAddons/Setup (416.33s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-994396 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-994396 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (28.55s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-994396 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-994396 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4f6cc746-15b0-4ddb-9f8b-fa3a7e7133ea] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4f6cc746-15b0-4ddb-9f8b-fa3a7e7133ea] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 28.009726385s
addons_test.go:694: (dbg) Run:  kubectl --context addons-994396 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-994396 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-994396 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (28.55s)
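The checks above amount to verifying that the gcp-auth webhook injects credential environment variables into newly created pods. A minimal manual sketch against the same cluster, assuming the gcp-auth addon is enabled and testdata/busybox.yaml defines the busybox pod used by the test:

	kubectl --context addons-994396 create -f testdata/busybox.yaml
	kubectl --context addons-994396 create sa gcp-auth-test
	# Both variables should be populated by the gcp-auth webhook
	kubectl --context addons-994396 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
	kubectl --context addons-994396 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"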

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (0.67s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.996419ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-994396
addons_test.go:332: (dbg) Run:  kubectl --context addons-994396 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-994396 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.67s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (5.3s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-z8nnd" [c555360c-9a9f-4fdd-aa67-f18c3d2a4eb2] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.007045115s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-994396 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.30s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.83s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 6.754779ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-qpjgn" [ca6b12be-7c02-4334-aa28-6300877d8e89] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005515517s
addons_test.go:463: (dbg) Run:  kubectl --context addons-994396 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-994396 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.83s)

                                                
                                    
x
+
TestAddons/parallel/Headlamp (18.87s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-994396 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-8674984b5f-rmpkd" [1900ec69-c511-434e-ab18-f132baa4233f] Pending
helpers_test.go:352: "headlamp-8674984b5f-rmpkd" [1900ec69-c511-434e-ab18-f132baa4233f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-8674984b5f-rmpkd" [1900ec69-c511-434e-ab18-f132baa4233f] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.009564585s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-994396 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-994396 addons disable headlamp --alsologtostderr -v=1: (5.882316916s)
--- PASS: TestAddons/parallel/Headlamp (18.87s)
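A minimal sketch of the same flow outside the harness, assuming the addons-994396 profile is running (the label selector is the one the test waits on):

	minikube addons enable headlamp -p addons-994396 --alsologtostderr -v=1
	# Check that the headlamp pod reaches Running, then tear the addon back down
	kubectl --context addons-994396 -n headlamp get pods -l app.kubernetes.io/name=headlamp
	minikube -p addons-994396 addons disable headlamp --alsologtostderr -v=1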

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.58s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-k2ptg" [8660bc3a-2538-4262-93dd-c6a8c9bc6b55] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003027588s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-994396 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.58s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-bn97p" [8cc13452-31c6-46b5-8efb-e8b44ec63c27] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.087156198s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-994396 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.00s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (83.82s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-994396
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-994396: (1m23.56579102s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-994396
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-994396
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-994396
--- PASS: TestAddons/StoppedEnableDisable (83.82s)
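The point of this test is that addon toggles still work against a stopped cluster. A minimal sketch with the same profile:

	minikube stop -p addons-994396
	# Addon state can still be toggled while the machine is stopped
	minikube addons enable dashboard -p addons-994396
	minikube addons disable dashboard -p addons-994396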

                                                
                                    
x
+
TestCertOptions (78.54s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-476227 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-476227 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m17.212023115s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-476227 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-476227 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-476227 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-476227" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-476227
--- PASS: TestCertOptions (78.54s)
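A minimal sketch of the same check, assuming a fresh profile: the extra IPs and names are baked into the apiserver certificate as SANs, which the openssl call then prints for inspection:

	minikube start -p cert-options-476227 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 --container-runtime=crio
	# Inspect the generated apiserver certificate for the requested SANs and port
	minikube -p cert-options-476227 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
	minikube delete -p cert-options-476227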

                                                
                                    
x
+
TestCertExpiration (306.01s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-734989 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-734989 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m22.893506219s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-734989 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-734989 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (42.156021468s)
helpers_test.go:175: Cleaning up "cert-expiration-734989" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-734989
--- PASS: TestCertExpiration (306.01s)
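A minimal sketch of the same scenario, assuming a fresh profile: start with a deliberately short certificate lifetime, let it lapse, then start again with a long lifetime so the certificates are regenerated:

	minikube start -p cert-expiration-734989 --memory=3072 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
	# after the 3m certificates have lapsed, a second start with a longer lifetime rotates them
	minikube start -p cert-expiration-734989 --memory=3072 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio
	minikube delete -p cert-expiration-734989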

                                                
                                    
x
+
TestForceSystemdFlag (84.97s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-360782 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-360782 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m23.747099577s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-360782 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-360782" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-360782
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-360782: (1.014805105s)
--- PASS: TestForceSystemdFlag (84.97s)
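A minimal sketch of the same verification, assuming a fresh profile: start with --force-systemd and confirm CRI-O was reconfigured via its drop-in config:

	minikube start -p force-systemd-flag-360782 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2 --container-runtime=crio
	# The drop-in is expected to select the systemd cgroup manager when the flag took effect
	minikube -p force-systemd-flag-360782 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
	minikube delete -p force-systemd-flag-360782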

                                                
                                    
x
+
TestForceSystemdEnv (61.28s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-940638 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-940638 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m0.238914858s)
helpers_test.go:175: Cleaning up "force-systemd-env-940638" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-940638
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-940638: (1.040351707s)
--- PASS: TestForceSystemdEnv (61.28s)

                                                
                                    
x
+
TestErrorSpam/setup (39.02s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-839859 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-839859 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-839859 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-839859 --driver=kvm2  --container-runtime=crio: (39.015584326s)
--- PASS: TestErrorSpam/setup (39.02s)

                                                
                                    
x
+
TestErrorSpam/start (0.35s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839859 --log_dir /tmp/nospam-839859 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839859 --log_dir /tmp/nospam-839859 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839859 --log_dir /tmp/nospam-839859 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

                                                
                                    
x
+
TestErrorSpam/status (0.66s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839859 --log_dir /tmp/nospam-839859 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839859 --log_dir /tmp/nospam-839859 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839859 --log_dir /tmp/nospam-839859 status
--- PASS: TestErrorSpam/status (0.66s)

                                                
                                    
x
+
TestErrorSpam/pause (1.59s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839859 --log_dir /tmp/nospam-839859 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839859 --log_dir /tmp/nospam-839859 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839859 --log_dir /tmp/nospam-839859 pause
--- PASS: TestErrorSpam/pause (1.59s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.85s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839859 --log_dir /tmp/nospam-839859 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839859 --log_dir /tmp/nospam-839859 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839859 --log_dir /tmp/nospam-839859 unpause
--- PASS: TestErrorSpam/unpause (1.85s)

                                                
                                    
x
+
TestErrorSpam/stop (89.14s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839859 --log_dir /tmp/nospam-839859 stop
E1101 09:06:35.408009  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:06:35.414485  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:06:35.425890  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:06:35.447335  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:06:35.488887  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:06:35.570483  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:06:35.732093  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:06:36.053958  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:06:36.696178  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:06:37.977921  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:06:40.540926  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:06:45.662564  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:06:55.904372  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:07:16.386540  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-839859 --log_dir /tmp/nospam-839859 stop: (1m26.80504978s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839859 --log_dir /tmp/nospam-839859 stop
E1101 09:07:57.348614  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-839859 --log_dir /tmp/nospam-839859 stop: (1.194891626s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-839859 --log_dir /tmp/nospam-839859 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-839859 --log_dir /tmp/nospam-839859 stop: (1.141016876s)
--- PASS: TestErrorSpam/stop (89.14s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21833-530629/.minikube/files/etc/test/nested/copy/534515/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (82.01s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-854568 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1101 09:09:19.270645  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-854568 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m22.009672902s)
--- PASS: TestFunctional/serial/StartWithProxy (82.01s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (41.56s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1101 09:09:21.208096  534515 config.go:182] Loaded profile config "functional-854568": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-854568 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-854568 --alsologtostderr -v=8: (41.555304765s)
functional_test.go:678: soft start took 41.5561927s for "functional-854568" cluster.
I1101 09:10:02.763806  534515 config.go:182] Loaded profile config "functional-854568": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (41.56s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-854568 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.49s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-854568 cache add registry.k8s.io/pause:3.1: (1.15874312s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-854568 cache add registry.k8s.io/pause:3.3: (1.169688428s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-854568 cache add registry.k8s.io/pause:latest: (1.15662748s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.49s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-854568 /tmp/TestFunctionalserialCacheCmdcacheadd_local1889218673/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 cache add minikube-local-cache-test:functional-854568
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 cache delete minikube-local-cache-test:functional-854568
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-854568
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.13s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-854568 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (181.380815ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-854568 cache reload: (1.010143456s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)
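
Note: the cache subtests above exercise a full add / evict / reload cycle. A sketch of the same sequence run by hand against this profile (`minikube` again standing in for the binary under test):
  minikube -p functional-854568 cache add registry.k8s.io/pause:latest                 # pull into the host cache and load into the node
  minikube -p functional-854568 ssh sudo crictl rmi registry.k8s.io/pause:latest       # evict the image from the node runtime
  minikube -p functional-854568 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # fails: image no longer present
  minikube -p functional-854568 cache reload                                           # push cached images back onto the node
  minikube -p functional-854568 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again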

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 kubectl -- --context functional-854568 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-854568 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (39.96s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-854568 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-854568 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.962319845s)
functional_test.go:776: restart took 39.962462355s for "functional-854568" cluster.
I1101 09:10:49.814953  534515 config.go:182] Loaded profile config "functional-854568": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (39.96s)
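
Note: ExtraConfig restarts the existing profile while threading a component flag through to the API server; the shape of the command, as logged above, is:
  # forward an admission-plugin setting to kube-apiserver on restart
  minikube start -p functional-854568 \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all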

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-854568 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.46s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-854568 logs: (1.461098176s)
--- PASS: TestFunctional/serial/LogsCmd (1.46s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 logs --file /tmp/TestFunctionalserialLogsFileCmd1100128771/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-854568 logs --file /tmp/TestFunctionalserialLogsFileCmd1100128771/001/logs.txt: (1.481366765s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.48s)

                                                
                                    
TestFunctional/serial/InvalidService (4.05s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-854568 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-854568
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-854568: exit status 115 (246.626756ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.129:31176 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-854568 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.05s)
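
Note: InvalidService checks that `minikube service` refuses to return a URL for a Service with no running backing pod. Reproduced by hand (the manifest lives in the suite's testdata and is referenced here only by path):
  kubectl --context functional-854568 apply -f testdata/invalidsvc.yaml
  minikube -p functional-854568 service invalid-svc      # exits 115 with SVC_UNREACHABLE, as in the stderr above
  kubectl --context functional-854568 delete -f testdata/invalidsvc.yaml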

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-854568 config get cpus: exit status 14 (66.711547ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-854568 config get cpus: exit status 14 (66.41618ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
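
Note: ConfigCmd leans on `config get` returning a distinct exit code when a key is unset. A minimal sketch matching the exit codes seen above:
  minikube -p functional-854568 config unset cpus
  minikube -p functional-854568 config get cpus    # exit 14: key not found in config
  minikube -p functional-854568 config set cpus 2
  minikube -p functional-854568 config get cpus    # prints 2, exit 0
  minikube -p functional-854568 config unset cpus  # back to the unset state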

                                                
                                    
TestFunctional/parallel/DryRun (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-854568 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-854568 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (117.273436ms)

                                                
                                                
-- stdout --
	* [functional-854568] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21833
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21833-530629/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-530629/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:11:38.118425  546358 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:11:38.118532  546358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:11:38.118544  546358 out.go:374] Setting ErrFile to fd 2...
	I1101 09:11:38.118551  546358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:11:38.118788  546358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-530629/.minikube/bin
	I1101 09:11:38.119345  546358 out.go:368] Setting JSON to false
	I1101 09:11:38.120346  546358 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":64420,"bootTime":1761923878,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:11:38.120447  546358 start.go:143] virtualization: kvm guest
	I1101 09:11:38.122519  546358 out.go:179] * [functional-854568] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:11:38.123811  546358 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 09:11:38.123834  546358 notify.go:221] Checking for updates...
	I1101 09:11:38.126136  546358 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:11:38.127285  546358 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-530629/kubeconfig
	I1101 09:11:38.128432  546358 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-530629/.minikube
	I1101 09:11:38.129719  546358 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:11:38.131009  546358 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:11:38.132963  546358 config.go:182] Loaded profile config "functional-854568": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:11:38.133606  546358 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:11:38.166703  546358 out.go:179] * Using the kvm2 driver based on existing profile
	I1101 09:11:38.167769  546358 start.go:309] selected driver: kvm2
	I1101 09:11:38.167781  546358 start.go:930] validating driver "kvm2" against &{Name:functional-854568 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-854568 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:11:38.167888  546358 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:11:38.169824  546358 out.go:203] 
	W1101 09:11:38.171019  546358 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1101 09:11:38.172136  546358 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-854568 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.23s)
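
Note: with --dry-run, argument validation runs against the existing profile but no VM is touched; the memory floor in the stderr above can be reproduced directly:
  # 250MB is below the 1800MB minimum, so this exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY
  minikube start -p functional-854568 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio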

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-854568 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-854568 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (124.733149ms)

                                                
                                                
-- stdout --
	* [functional-854568] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21833
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21833-530629/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-530629/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:11:37.313650  546293 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:11:37.313769  546293 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:11:37.313779  546293 out.go:374] Setting ErrFile to fd 2...
	I1101 09:11:37.313782  546293 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:11:37.314129  546293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-530629/.minikube/bin
	I1101 09:11:37.314552  546293 out.go:368] Setting JSON to false
	I1101 09:11:37.315421  546293 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":64419,"bootTime":1761923878,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:11:37.315520  546293 start.go:143] virtualization: kvm guest
	I1101 09:11:37.317513  546293 out.go:179] * [functional-854568] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1101 09:11:37.319383  546293 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 09:11:37.319433  546293 notify.go:221] Checking for updates...
	I1101 09:11:37.321909  546293 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:11:37.323212  546293 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-530629/kubeconfig
	I1101 09:11:37.328075  546293 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-530629/.minikube
	I1101 09:11:37.329405  546293 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:11:37.330555  546293 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:11:37.332155  546293 config.go:182] Loaded profile config "functional-854568": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:11:37.332614  546293 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:11:37.363704  546293 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1101 09:11:37.364878  546293 start.go:309] selected driver: kvm2
	I1101 09:11:37.364892  546293 start.go:930] validating driver "kvm2" against &{Name:functional-854568 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-854568 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:11:37.365033  546293 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:11:37.367058  546293 out.go:203] 
	W1101 09:11:37.368115  546293 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1101 09:11:37.369175  546293 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)
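
Note: the localized output itself is what this test checks. Roughly translated, the French lines above read "Using the kvm2 driver based on the existing profile" and "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250 MiB is below the usable minimum of 1800 MB".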

                                                
                                    
TestFunctional/parallel/StatusCmd (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.68s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (9.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-854568 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-854568 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-8fqgj" [645dc979-5e33-4017-b9c6-399736482d7d] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-8fqgj" [645dc979-5e33-4017-b9c6-399736482d7d] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.003815671s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.129:31725
functional_test.go:1680: http://192.168.39.129:31725: success! body:
Request served by hello-node-connect-7d85dfc575-8fqgj

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.129:31725
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.44s)
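
Note: ServiceCmdConnect exposes an echo server over a NodePort and fetches it through the URL minikube reports. A by-hand sketch (curl is an assumed stand-in for the test's HTTP client):
  kubectl --context functional-854568 create deployment hello-node-connect --image kicbase/echo-server
  kubectl --context functional-854568 expose deployment hello-node-connect --type=NodePort --port=8080
  URL=$(minikube -p functional-854568 service hello-node-connect --url)
  curl "$URL"    # the echo server replies with the request it served, as in the body above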

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.31s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 ssh -n functional-854568 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 cp functional-854568:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1080454234/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 ssh -n functional-854568 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 ssh -n functional-854568 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.02s)
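
Note: CpCmd copies in both directions between host and node; the three forms exercised above are, roughly (the host-side destination path here is illustrative):
  minikube -p functional-854568 cp testdata/cp-test.txt /home/docker/cp-test.txt            # host -> node
  minikube -p functional-854568 cp functional-854568:/home/docker/cp-test.txt ./cp-test.txt # node -> host
  minikube -p functional-854568 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt     # missing target directory is created on the node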

                                                
                                    
TestFunctional/parallel/FileSync (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/534515/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 ssh "sudo cat /etc/test/nested/copy/534515/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.19s)
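
Note: FileSync pairs with the CopySyncFile step at the top of this run: a file placed under the host's .minikube/files/ tree (here .minikube/files/etc/test/nested/copy/534515/hosts) is expected to appear at the same path rooted at / inside the VM, which is what the check above reads back:
  minikube -p functional-854568 ssh "sudo cat /etc/test/nested/copy/534515/hosts"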

                                                
                                    
TestFunctional/parallel/CertSync (1.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/534515.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 ssh "sudo cat /etc/ssl/certs/534515.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/534515.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 ssh "sudo cat /usr/share/ca-certificates/534515.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/5345152.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 ssh "sudo cat /etc/ssl/certs/5345152.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/5345152.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 ssh "sudo cat /usr/share/ca-certificates/5345152.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.14s)
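
Note: CertSync verifies that a user-supplied certificate is visible in the guest both as a named .pem and under what looks like an OpenSSL hash-style filename. Checking the first certificate by hand:
  minikube -p functional-854568 ssh "sudo cat /etc/ssl/certs/534515.pem"
  minikube -p functional-854568 ssh "sudo cat /usr/share/ca-certificates/534515.pem"
  minikube -p functional-854568 ssh "sudo cat /etc/ssl/certs/51391683.0"   # same certificate under its hash-style name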

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-854568 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-854568 ssh "sudo systemctl is-active docker": exit status 1 (164.260136ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-854568 ssh "sudo systemctl is-active containerd": exit status 1 (166.20371ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.33s)
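
Note: because this profile runs CRI-O, the other runtimes should be inactive; `systemctl is-active` exits non-zero for an inactive unit, which is why exit status 3 counts as a pass above. For contrast (the crio check is not part of the logged test):
  minikube -p functional-854568 ssh "sudo systemctl is-active docker"      # prints "inactive", exit 3
  minikube -p functional-854568 ssh "sudo systemctl is-active containerd"  # prints "inactive", exit 3
  minikube -p functional-854568 ssh "sudo systemctl is-active crio"        # expected "active", exit 0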

                                                
                                    
TestFunctional/parallel/License (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.27s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (37.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-854568 /tmp/TestFunctionalparallelMountCmdany-port2758721856/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761988257148333100" to /tmp/TestFunctionalparallelMountCmdany-port2758721856/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761988257148333100" to /tmp/TestFunctionalparallelMountCmdany-port2758721856/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761988257148333100" to /tmp/TestFunctionalparallelMountCmdany-port2758721856/001/test-1761988257148333100
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-854568 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (174.594442ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1101 09:10:57.323366  534515 retry.go:31] will retry after 565.734743ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  1 09:10 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  1 09:10 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  1 09:10 test-1761988257148333100
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 ssh cat /mount-9p/test-1761988257148333100
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-854568 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [249b33c1-c442-4698-8c37-9d6af53ed2fc] Pending
helpers_test.go:352: "busybox-mount" [249b33c1-c442-4698-8c37-9d6af53ed2fc] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [249b33c1-c442-4698-8c37-9d6af53ed2fc] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [249b33c1-c442-4698-8c37-9d6af53ed2fc] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 35.004231713s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-854568 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-854568 /tmp/TestFunctionalparallelMountCmdany-port2758721856/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (37.14s)
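
Note: the mount tests run `minikube mount` as a background daemon and verify the 9p mount from inside the guest. The core loop, with the host path shortened to a placeholder:
  minikube -p functional-854568 mount /tmp/hostdir:/mount-9p &           # keep the mount process running
  minikube -p functional-854568 ssh "findmnt -T /mount-9p | grep 9p"     # confirm a 9p filesystem is mounted
  minikube -p functional-854568 ssh -- ls -la /mount-9p                  # host files are visible in the guest
  minikube -p functional-854568 ssh "sudo umount -f /mount-9p"           # tear down before stopping the mount process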

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "247.247313ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "62.802007ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "247.831015ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "65.252779ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-854568 /tmp/TestFunctionalparallelMountCmdspecific-port4096350073/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-854568 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (163.174847ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1101 09:11:34.448639  534515 retry.go:31] will retry after 570.54247ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-854568 /tmp/TestFunctionalparallelMountCmdspecific-port4096350073/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
E1101 09:11:35.403935  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-854568 ssh "sudo umount -f /mount-9p": exit status 1 (163.293762ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-854568 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-854568 /tmp/TestFunctionalparallelMountCmdspecific-port4096350073/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.43s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-854568 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3330862646/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-854568 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3330862646/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-854568 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3330862646/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-854568 ssh "findmnt -T" /mount1: exit status 1 (176.554066ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1101 09:11:35.889073  534515 retry.go:31] will retry after 509.49705ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-854568 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-854568 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3330862646/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-854568 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3330862646/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-854568 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3330862646/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.23s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.2s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-854568 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-854568
localhost/kicbase/echo-server:functional-854568
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-854568 image ls --format short --alsologtostderr:
I1101 09:16:47.588324  547755 out.go:360] Setting OutFile to fd 1 ...
I1101 09:16:47.588568  547755 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:16:47.588576  547755 out.go:374] Setting ErrFile to fd 2...
I1101 09:16:47.588580  547755 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:16:47.588761  547755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-530629/.minikube/bin
I1101 09:16:47.590395  547755 config.go:182] Loaded profile config "functional-854568": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:16:47.590569  547755 config.go:182] Loaded profile config "functional-854568": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:16:47.592869  547755 ssh_runner.go:195] Run: systemctl --version
I1101 09:16:47.595448  547755 main.go:143] libmachine: domain functional-854568 has defined MAC address 52:54:00:cb:ec:ba in network mk-functional-854568
I1101 09:16:47.595911  547755 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cb:ec:ba", ip: ""} in network mk-functional-854568: {Iface:virbr1 ExpiryTime:2025-11-01 10:08:15 +0000 UTC Type:0 Mac:52:54:00:cb:ec:ba Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:functional-854568 Clientid:01:52:54:00:cb:ec:ba}
I1101 09:16:47.595941  547755 main.go:143] libmachine: domain functional-854568 has defined IP address 192.168.39.129 and MAC address 52:54:00:cb:ec:ba in network mk-functional-854568
I1101 09:16:47.596108  547755 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/functional-854568/id_rsa Username:docker}
I1101 09:16:47.680142  547755 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.20s)
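The three image-list tests that follow (table, json, yaml) run the same subcommand and differ only in the --format value. Sketch of the four variants against the same profile:

out/minikube-linux-amd64 -p functional-854568 image ls --format short
out/minikube-linux-amd64 -p functional-854568 image ls --format table
out/minikube-linux-amd64 -p functional-854568 image ls --format json
out/minikube-linux-amd64 -p functional-854568 image ls --format yaml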

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.19s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-854568 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ localhost/minikube-local-cache-test     │ functional-854568  │ 3e35a92e519b8 │ 3.33kB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-854568  │ 9056ab77afb8e │ 4.94MB │
│ localhost/my-image                      │ functional-854568  │ 6ff121faa81a0 │ 1.47MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-854568 image ls --format table --alsologtostderr:
I1101 09:16:51.070391  547837 out.go:360] Setting OutFile to fd 1 ...
I1101 09:16:51.070637  547837 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:16:51.070645  547837 out.go:374] Setting ErrFile to fd 2...
I1101 09:16:51.070649  547837 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:16:51.070858  547837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-530629/.minikube/bin
I1101 09:16:51.071449  547837 config.go:182] Loaded profile config "functional-854568": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:16:51.071545  547837 config.go:182] Loaded profile config "functional-854568": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:16:51.073621  547837 ssh_runner.go:195] Run: systemctl --version
I1101 09:16:51.075922  547837 main.go:143] libmachine: domain functional-854568 has defined MAC address 52:54:00:cb:ec:ba in network mk-functional-854568
I1101 09:16:51.076387  547837 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cb:ec:ba", ip: ""} in network mk-functional-854568: {Iface:virbr1 ExpiryTime:2025-11-01 10:08:15 +0000 UTC Type:0 Mac:52:54:00:cb:ec:ba Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:functional-854568 Clientid:01:52:54:00:cb:ec:ba}
I1101 09:16:51.076421  547837 main.go:143] libmachine: domain functional-854568 has defined IP address 192.168.39.129 and MAC address 52:54:00:cb:ec:ba in network mk-functional-854568
I1101 09:16:51.076592  547837 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/functional-854568/id_rsa Username:docker}
I1101 09:16:51.159384  547837 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.19s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-854568 image ls --format json --alsologtostderr:
[{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"ae4eea9b320ed030ca81f877e14347c15f12e37bb674b98a56521be8041d48ae","repoDigests":["docker.io/library/3f43cb066827fe12ec8aa6233751c188ba264d5967c36764c914a6d6ce875753-tmp@sha256:855a0e1d20060d58e8a18555d1d92131bb8757f6b59bbb759eb8fbd1e97dc807"],"repoTags":[],"size":"1466018"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631
262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"3e35a92e519b8ab85e917756ea66066e4e8655bfcd255ed643b9e2e453a66b9a","repoDigests":["localhost/minikube-local-cache-test@sha256:e5439352db50efc5971f0a9ab03417ffc1d146ec7e46969569fe381662a20a69"],"repoTags":["localhost/minikube-local-cache-test:functional-8545
68"],"size":"3330"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5
bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edca
c8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-854568"],"size":"4944818"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],
"size":"76103547"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"6ff121faa81a099526d348af3fa4036473f43e7f363e30eaa8bba5ed52588d96","repoDigests":["localhost/my-image@sha256:f570906fc07d8e822d9a499cd8667886a8e8fe19330b81c0b8d32c4495f586b0"],"repoTags":["localhost/my-image:functional-854568"],"size":"1468600"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbad
c97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-854568 image ls --format json --alsologtostderr:
I1101 09:16:50.876040  547826 out.go:360] Setting OutFile to fd 1 ...
I1101 09:16:50.876318  547826 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:16:50.876329  547826 out.go:374] Setting ErrFile to fd 2...
I1101 09:16:50.876335  547826 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:16:50.876583  547826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-530629/.minikube/bin
I1101 09:16:50.877168  547826 config.go:182] Loaded profile config "functional-854568": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:16:50.877315  547826 config.go:182] Loaded profile config "functional-854568": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:16:50.879632  547826 ssh_runner.go:195] Run: systemctl --version
I1101 09:16:50.882210  547826 main.go:143] libmachine: domain functional-854568 has defined MAC address 52:54:00:cb:ec:ba in network mk-functional-854568
I1101 09:16:50.882648  547826 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cb:ec:ba", ip: ""} in network mk-functional-854568: {Iface:virbr1 ExpiryTime:2025-11-01 10:08:15 +0000 UTC Type:0 Mac:52:54:00:cb:ec:ba Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:functional-854568 Clientid:01:52:54:00:cb:ec:ba}
I1101 09:16:50.882682  547826 main.go:143] libmachine: domain functional-854568 has defined IP address 192.168.39.129 and MAC address 52:54:00:cb:ec:ba in network mk-functional-854568
I1101 09:16:50.882821  547826 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/functional-854568/id_rsa Username:docker}
I1101 09:16:50.964295  547826 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-854568 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-854568
size: "4944818"
- id: 3e35a92e519b8ab85e917756ea66066e4e8655bfcd255ed643b9e2e453a66b9a
repoDigests:
- localhost/minikube-local-cache-test@sha256:e5439352db50efc5971f0a9ab03417ffc1d146ec7e46969569fe381662a20a69
repoTags:
- localhost/minikube-local-cache-test:functional-854568
size: "3330"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-854568 image ls --format yaml --alsologtostderr:
I1101 09:16:47.788815  547766 out.go:360] Setting OutFile to fd 1 ...
I1101 09:16:47.789110  547766 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:16:47.789121  547766 out.go:374] Setting ErrFile to fd 2...
I1101 09:16:47.789124  547766 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:16:47.789334  547766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-530629/.minikube/bin
I1101 09:16:47.789966  547766 config.go:182] Loaded profile config "functional-854568": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:16:47.790069  547766 config.go:182] Loaded profile config "functional-854568": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:16:47.792196  547766 ssh_runner.go:195] Run: systemctl --version
I1101 09:16:47.794625  547766 main.go:143] libmachine: domain functional-854568 has defined MAC address 52:54:00:cb:ec:ba in network mk-functional-854568
I1101 09:16:47.795009  547766 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cb:ec:ba", ip: ""} in network mk-functional-854568: {Iface:virbr1 ExpiryTime:2025-11-01 10:08:15 +0000 UTC Type:0 Mac:52:54:00:cb:ec:ba Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:functional-854568 Clientid:01:52:54:00:cb:ec:ba}
I1101 09:16:47.795035  547766 main.go:143] libmachine: domain functional-854568 has defined IP address 192.168.39.129 and MAC address 52:54:00:cb:ec:ba in network mk-functional-854568
I1101 09:16:47.795156  547766 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/functional-854568/id_rsa Username:docker}
I1101 09:16:47.877710  547766 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.89s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-854568 ssh pgrep buildkitd: exit status 1 (161.828068ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 image build -t localhost/my-image:functional-854568 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-854568 image build -t localhost/my-image:functional-854568 testdata/build --alsologtostderr: (2.527125976s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-854568 image build -t localhost/my-image:functional-854568 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> ae4eea9b320
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-854568
--> 6ff121faa81
Successfully tagged localhost/my-image:functional-854568
6ff121faa81a099526d348af3fa4036473f43e7f363e30eaa8bba5ed52588d96
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-854568 image build -t localhost/my-image:functional-854568 testdata/build --alsologtostderr:
I1101 09:16:48.148010  547804 out.go:360] Setting OutFile to fd 1 ...
I1101 09:16:48.148269  547804 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:16:48.148278  547804 out.go:374] Setting ErrFile to fd 2...
I1101 09:16:48.148282  547804 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:16:48.148497  547804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-530629/.minikube/bin
I1101 09:16:48.149073  547804 config.go:182] Loaded profile config "functional-854568": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:16:48.149775  547804 config.go:182] Loaded profile config "functional-854568": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:16:48.152253  547804 ssh_runner.go:195] Run: systemctl --version
I1101 09:16:48.154727  547804 main.go:143] libmachine: domain functional-854568 has defined MAC address 52:54:00:cb:ec:ba in network mk-functional-854568
I1101 09:16:48.155071  547804 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cb:ec:ba", ip: ""} in network mk-functional-854568: {Iface:virbr1 ExpiryTime:2025-11-01 10:08:15 +0000 UTC Type:0 Mac:52:54:00:cb:ec:ba Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:functional-854568 Clientid:01:52:54:00:cb:ec:ba}
I1101 09:16:48.155098  547804 main.go:143] libmachine: domain functional-854568 has defined IP address 192.168.39.129 and MAC address 52:54:00:cb:ec:ba in network mk-functional-854568
I1101 09:16:48.155217  547804 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/functional-854568/id_rsa Username:docker}
I1101 09:16:48.239146  547804 build_images.go:162] Building image from path: /tmp/build.2151474740.tar
I1101 09:16:48.239234  547804 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1101 09:16:48.252422  547804 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2151474740.tar
I1101 09:16:48.257713  547804 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2151474740.tar: stat -c "%s %y" /var/lib/minikube/build/build.2151474740.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2151474740.tar': No such file or directory
I1101 09:16:48.257749  547804 ssh_runner.go:362] scp /tmp/build.2151474740.tar --> /var/lib/minikube/build/build.2151474740.tar (3072 bytes)
I1101 09:16:48.292320  547804 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2151474740
I1101 09:16:48.304957  547804 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2151474740 -xf /var/lib/minikube/build/build.2151474740.tar
I1101 09:16:48.317008  547804 crio.go:315] Building image: /var/lib/minikube/build/build.2151474740
I1101 09:16:48.317114  547804 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-854568 /var/lib/minikube/build/build.2151474740 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1101 09:16:50.574827  547804 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-854568 /var/lib/minikube/build/build.2151474740 --cgroup-manager=cgroupfs: (2.257677883s)
I1101 09:16:50.574954  547804 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2151474740
I1101 09:16:50.595636  547804 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2151474740.tar
I1101 09:16:50.608932  547804 build_images.go:218] Built localhost/my-image:functional-854568 from /tmp/build.2151474740.tar
I1101 09:16:50.608971  547804 build_images.go:134] succeeded building to: functional-854568
I1101 09:16:50.608993  547804 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.89s)
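From the STEP lines in the build output, the testdata/build context evidently contains a content.txt file and a three-line Containerfile (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). A rough by-hand reconstruction follows; the contents of content.txt are not visible in the log, so the placeholder below is an assumption:

mkdir -p /tmp/build-demo && cd /tmp/build-demo
echo "placeholder" > content.txt    # assumed contents; the real file is not shown in the log
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
# minikube ships the context into the VM and drives sudo podman build there, per the log above
out/minikube-linux-amd64 -p functional-854568 image build -t localhost/my-image:functional-854568 . --alsologtostderr
out/minikube-linux-amd64 -p functional-854568 image ls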

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.41s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-854568
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 image load --daemon kicbase/echo-server:functional-854568 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-854568 image load --daemon kicbase/echo-server:functional-854568 --alsologtostderr: (1.012001441s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.87s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 image load --daemon kicbase/echo-server:functional-854568 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.87s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-854568
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 image load --daemon kicbase/echo-server:functional-854568 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.00s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.5s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 image save kicbase/echo-server:functional-854568 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 image rm kicbase/echo-server:functional-854568 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.66s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.66s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-854568
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 image save --daemon kicbase/echo-server:functional-854568 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-854568
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)
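Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon exercise a full image round trip. Condensed sketch using the same image tag; the tarball path is illustrative (the test writes into its Jenkins workspace):

# export the tagged image from the cluster runtime to a tarball on the host
out/minikube-linux-amd64 -p functional-854568 image save kicbase/echo-server:functional-854568 /tmp/echo-server-save.tar --alsologtostderr
# remove it from the cluster runtime, then restore it from the tarball
out/minikube-linux-amd64 -p functional-854568 image rm kicbase/echo-server:functional-854568 --alsologtostderr
out/minikube-linux-amd64 -p functional-854568 image load /tmp/echo-server-save.tar --alsologtostderr
# push it back into the host docker daemon and confirm it arrived
out/minikube-linux-amd64 -p functional-854568 image save --daemon kicbase/echo-server:functional-854568 --alsologtostderr
docker image inspect localhost/kicbase/echo-server:functional-854568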

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.21s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-854568 service list: (1.205274021s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.21s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.21s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-854568 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-854568 service list -o json: (1.206349747s)
functional_test.go:1504: Took "1.206459967s" to run "out/minikube-linux-amd64 -p functional-854568 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.21s)
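Both ServiceCmd tests list the same services; -o json is the machine-readable variant. Sketch:

out/minikube-linux-amd64 -p functional-854568 service list
out/minikube-linux-amd64 -p functional-854568 service list -o json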

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-854568
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-854568
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-854568
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (213.98s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1101 09:21:35.403365  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:22:58.474923  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-162787 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m33.384731602s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (213.98s)
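The profile started here is a multi-control-plane (HA) cluster on the kvm2 driver with the crio runtime. The core invocation, reduced from the test command line above (the test additionally passes --alsologtostderr -v 5 to the start command):

out/minikube-linux-amd64 -p ha-162787 start --ha --memory 3072 --wait true --driver=kvm2 --container-runtime=crio
# confirm all nodes report Running and the kubeconfig is Configured
out/minikube-linux-amd64 -p ha-162787 status --alsologtostderr -v 5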

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.39s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-162787 kubectl -- rollout status deployment/busybox: (4.022630946s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 kubectl -- exec busybox-7b57f96db7-2mlgt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 kubectl -- exec busybox-7b57f96db7-kj5x9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 kubectl -- exec busybox-7b57f96db7-mnhrp -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 kubectl -- exec busybox-7b57f96db7-2mlgt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 kubectl -- exec busybox-7b57f96db7-kj5x9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 kubectl -- exec busybox-7b57f96db7-mnhrp -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 kubectl -- exec busybox-7b57f96db7-2mlgt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 kubectl -- exec busybox-7b57f96db7-kj5x9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 kubectl -- exec busybox-7b57f96db7-mnhrp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.39s)
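The DNS checks above resolve kubernetes.io, kubernetes.default and kubernetes.default.svc.cluster.local from each busybox replica. Condensed sketch; pod names are generated, so one is looked up first:

out/minikube-linux-amd64 -p ha-162787 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
out/minikube-linux-amd64 -p ha-162787 kubectl -- rollout status deployment/busybox
# pick one busybox pod and resolve a cluster-internal name from it
POD=$(out/minikube-linux-amd64 -p ha-162787 kubectl -- get pods -o jsonpath='{.items[0].metadata.name}')
out/minikube-linux-amd64 -p ha-162787 kubectl -- exec "$POD" -- nslookup kubernetes.default.svc.cluster.local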

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.38s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 kubectl -- exec busybox-7b57f96db7-2mlgt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 kubectl -- exec busybox-7b57f96db7-2mlgt -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 kubectl -- exec busybox-7b57f96db7-kj5x9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 kubectl -- exec busybox-7b57f96db7-kj5x9 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 kubectl -- exec busybox-7b57f96db7-mnhrp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 kubectl -- exec busybox-7b57f96db7-mnhrp -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.38s)
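Host reachability is verified by resolving host.minikube.internal inside a pod and pinging the address it returns (192.168.39.1 on this run). The two commands below are taken from the log; the pod name is specific to this run:

out/minikube-linux-amd64 -p ha-162787 kubectl -- exec busybox-7b57f96db7-2mlgt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
out/minikube-linux-amd64 -p ha-162787 kubectl -- exec busybox-7b57f96db7-2mlgt -- sh -c "ping -c 1 192.168.39.1"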

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (44.88s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-162787 node add --alsologtostderr -v 5: (44.14778652s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (44.88s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-162787 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.72s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.72s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (11.08s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 cp testdata/cp-test.txt ha-162787:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 ssh -n ha-162787 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 cp ha-162787:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3399555567/001/cp-test_ha-162787.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 ssh -n ha-162787 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 cp ha-162787:/home/docker/cp-test.txt ha-162787-m02:/home/docker/cp-test_ha-162787_ha-162787-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 ssh -n ha-162787 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 ssh -n ha-162787-m02 "sudo cat /home/docker/cp-test_ha-162787_ha-162787-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 cp ha-162787:/home/docker/cp-test.txt ha-162787-m03:/home/docker/cp-test_ha-162787_ha-162787-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 ssh -n ha-162787 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 ssh -n ha-162787-m03 "sudo cat /home/docker/cp-test_ha-162787_ha-162787-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 cp ha-162787:/home/docker/cp-test.txt ha-162787-m04:/home/docker/cp-test_ha-162787_ha-162787-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 ssh -n ha-162787 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 ssh -n ha-162787-m04 "sudo cat /home/docker/cp-test_ha-162787_ha-162787-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 cp testdata/cp-test.txt ha-162787-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 ssh -n ha-162787-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 cp ha-162787-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3399555567/001/cp-test_ha-162787-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 ssh -n ha-162787-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 cp ha-162787-m02:/home/docker/cp-test.txt ha-162787:/home/docker/cp-test_ha-162787-m02_ha-162787.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 ssh -n ha-162787-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 ssh -n ha-162787 "sudo cat /home/docker/cp-test_ha-162787-m02_ha-162787.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 cp ha-162787-m02:/home/docker/cp-test.txt ha-162787-m03:/home/docker/cp-test_ha-162787-m02_ha-162787-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 ssh -n ha-162787-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 ssh -n ha-162787-m03 "sudo cat /home/docker/cp-test_ha-162787-m02_ha-162787-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 cp ha-162787-m02:/home/docker/cp-test.txt ha-162787-m04:/home/docker/cp-test_ha-162787-m02_ha-162787-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 ssh -n ha-162787-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 ssh -n ha-162787-m04 "sudo cat /home/docker/cp-test_ha-162787-m02_ha-162787-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 cp testdata/cp-test.txt ha-162787-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 ssh -n ha-162787-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 cp ha-162787-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3399555567/001/cp-test_ha-162787-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 ssh -n ha-162787-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 cp ha-162787-m03:/home/docker/cp-test.txt ha-162787:/home/docker/cp-test_ha-162787-m03_ha-162787.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 ssh -n ha-162787-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 ssh -n ha-162787 "sudo cat /home/docker/cp-test_ha-162787-m03_ha-162787.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 cp ha-162787-m03:/home/docker/cp-test.txt ha-162787-m02:/home/docker/cp-test_ha-162787-m03_ha-162787-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 ssh -n ha-162787-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 ssh -n ha-162787-m02 "sudo cat /home/docker/cp-test_ha-162787-m03_ha-162787-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 cp ha-162787-m03:/home/docker/cp-test.txt ha-162787-m04:/home/docker/cp-test_ha-162787-m03_ha-162787-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 ssh -n ha-162787-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 ssh -n ha-162787-m04 "sudo cat /home/docker/cp-test_ha-162787-m03_ha-162787-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 cp testdata/cp-test.txt ha-162787-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 ssh -n ha-162787-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 cp ha-162787-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3399555567/001/cp-test_ha-162787-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 ssh -n ha-162787-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 cp ha-162787-m04:/home/docker/cp-test.txt ha-162787:/home/docker/cp-test_ha-162787-m04_ha-162787.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 ssh -n ha-162787-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 ssh -n ha-162787 "sudo cat /home/docker/cp-test_ha-162787-m04_ha-162787.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 cp ha-162787-m04:/home/docker/cp-test.txt ha-162787-m02:/home/docker/cp-test_ha-162787-m04_ha-162787-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 ssh -n ha-162787-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 ssh -n ha-162787-m02 "sudo cat /home/docker/cp-test_ha-162787-m04_ha-162787-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 cp ha-162787-m04:/home/docker/cp-test.txt ha-162787-m03:/home/docker/cp-test_ha-162787-m04_ha-162787-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 ssh -n ha-162787-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 ssh -n ha-162787-m03 "sudo cat /home/docker/cp-test_ha-162787-m04_ha-162787-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (11.08s)
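Each copy above is verified by cat-ing the file over ssh on the receiving node. The basic host-to-node, node-to-host and node-to-node pattern, using the node names from this run (the host-side destination path is illustrative):

out/minikube-linux-amd64 -p ha-162787 cp testdata/cp-test.txt ha-162787:/home/docker/cp-test.txt
out/minikube-linux-amd64 -p ha-162787 cp ha-162787:/home/docker/cp-test.txt /tmp/cp-test_ha-162787.txt
out/minikube-linux-amd64 -p ha-162787 cp ha-162787:/home/docker/cp-test.txt ha-162787-m02:/home/docker/cp-test_ha-162787_ha-162787-m02.txt
# verify on the receiving node
out/minikube-linux-amd64 -p ha-162787 ssh -n ha-162787-m02 "sudo cat /home/docker/cp-test_ha-162787_ha-162787-m02.txt"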

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (86.98s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 node stop m02 --alsologtostderr -v 5
E1101 09:25:56.882010  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/functional-854568/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:25:56.888375  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/functional-854568/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:25:56.899770  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/functional-854568/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:25:56.921203  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/functional-854568/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:25:56.962678  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/functional-854568/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:25:57.044162  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/functional-854568/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:25:57.205698  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/functional-854568/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:25:57.527482  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/functional-854568/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:25:58.169765  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/functional-854568/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:25:59.451408  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/functional-854568/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:26:02.013966  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/functional-854568/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:26:07.135696  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/functional-854568/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:26:17.377654  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/functional-854568/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:26:35.403040  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:26:37.859051  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/functional-854568/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-162787 node stop m02 --alsologtostderr -v 5: (1m26.450376415s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-162787 status --alsologtostderr -v 5: exit status 7 (525.014938ms)

-- stdout --
	ha-162787
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-162787-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-162787-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-162787-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1101 09:27:17.518450  552471 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:27:17.518600  552471 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:27:17.518614  552471 out.go:374] Setting ErrFile to fd 2...
	I1101 09:27:17.518621  552471 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:27:17.518849  552471 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-530629/.minikube/bin
	I1101 09:27:17.519056  552471 out.go:368] Setting JSON to false
	I1101 09:27:17.519086  552471 mustload.go:66] Loading cluster: ha-162787
	I1101 09:27:17.519205  552471 notify.go:221] Checking for updates...
	I1101 09:27:17.519513  552471 config.go:182] Loaded profile config "ha-162787": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:27:17.519533  552471 status.go:174] checking status of ha-162787 ...
	I1101 09:27:17.522144  552471 status.go:371] ha-162787 host status = "Running" (err=<nil>)
	I1101 09:27:17.522164  552471 host.go:66] Checking if "ha-162787" exists ...
	I1101 09:27:17.525447  552471 main.go:143] libmachine: domain ha-162787 has defined MAC address 52:54:00:61:24:dd in network mk-ha-162787
	I1101 09:27:17.526217  552471 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:61:24:dd", ip: ""} in network mk-ha-162787: {Iface:virbr1 ExpiryTime:2025-11-01 10:21:28 +0000 UTC Type:0 Mac:52:54:00:61:24:dd Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-162787 Clientid:01:52:54:00:61:24:dd}
	I1101 09:27:17.526252  552471 main.go:143] libmachine: domain ha-162787 has defined IP address 192.168.39.114 and MAC address 52:54:00:61:24:dd in network mk-ha-162787
	I1101 09:27:17.526400  552471 host.go:66] Checking if "ha-162787" exists ...
	I1101 09:27:17.526647  552471 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:27:17.528968  552471 main.go:143] libmachine: domain ha-162787 has defined MAC address 52:54:00:61:24:dd in network mk-ha-162787
	I1101 09:27:17.529319  552471 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:61:24:dd", ip: ""} in network mk-ha-162787: {Iface:virbr1 ExpiryTime:2025-11-01 10:21:28 +0000 UTC Type:0 Mac:52:54:00:61:24:dd Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-162787 Clientid:01:52:54:00:61:24:dd}
	I1101 09:27:17.529359  552471 main.go:143] libmachine: domain ha-162787 has defined IP address 192.168.39.114 and MAC address 52:54:00:61:24:dd in network mk-ha-162787
	I1101 09:27:17.529514  552471 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/ha-162787/id_rsa Username:docker}
	I1101 09:27:17.618422  552471 ssh_runner.go:195] Run: systemctl --version
	I1101 09:27:17.625480  552471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:27:17.645584  552471 kubeconfig.go:125] found "ha-162787" server: "https://192.168.39.254:8443"
	I1101 09:27:17.645643  552471 api_server.go:166] Checking apiserver status ...
	I1101 09:27:17.645695  552471 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:27:17.669011  552471 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1365/cgroup
	W1101 09:27:17.680667  552471 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1365/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:27:17.680729  552471 ssh_runner.go:195] Run: ls
	I1101 09:27:17.686886  552471 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1101 09:27:17.693836  552471 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1101 09:27:17.693862  552471 status.go:463] ha-162787 apiserver status = Running (err=<nil>)
	I1101 09:27:17.693874  552471 status.go:176] ha-162787 status: &{Name:ha-162787 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:27:17.693890  552471 status.go:174] checking status of ha-162787-m02 ...
	I1101 09:27:17.695622  552471 status.go:371] ha-162787-m02 host status = "Stopped" (err=<nil>)
	I1101 09:27:17.695647  552471 status.go:384] host is not running, skipping remaining checks
	I1101 09:27:17.695653  552471 status.go:176] ha-162787-m02 status: &{Name:ha-162787-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:27:17.695667  552471 status.go:174] checking status of ha-162787-m03 ...
	I1101 09:27:17.697071  552471 status.go:371] ha-162787-m03 host status = "Running" (err=<nil>)
	I1101 09:27:17.697091  552471 host.go:66] Checking if "ha-162787-m03" exists ...
	I1101 09:27:17.700034  552471 main.go:143] libmachine: domain ha-162787-m03 has defined MAC address 52:54:00:cc:d5:af in network mk-ha-162787
	I1101 09:27:17.700468  552471 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cc:d5:af", ip: ""} in network mk-ha-162787: {Iface:virbr1 ExpiryTime:2025-11-01 10:23:39 +0000 UTC Type:0 Mac:52:54:00:cc:d5:af Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-162787-m03 Clientid:01:52:54:00:cc:d5:af}
	I1101 09:27:17.700489  552471 main.go:143] libmachine: domain ha-162787-m03 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:d5:af in network mk-ha-162787
	I1101 09:27:17.700621  552471 host.go:66] Checking if "ha-162787-m03" exists ...
	I1101 09:27:17.700818  552471 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:27:17.703185  552471 main.go:143] libmachine: domain ha-162787-m03 has defined MAC address 52:54:00:cc:d5:af in network mk-ha-162787
	I1101 09:27:17.703614  552471 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cc:d5:af", ip: ""} in network mk-ha-162787: {Iface:virbr1 ExpiryTime:2025-11-01 10:23:39 +0000 UTC Type:0 Mac:52:54:00:cc:d5:af Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-162787-m03 Clientid:01:52:54:00:cc:d5:af}
	I1101 09:27:17.703633  552471 main.go:143] libmachine: domain ha-162787-m03 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:d5:af in network mk-ha-162787
	I1101 09:27:17.703762  552471 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/ha-162787-m03/id_rsa Username:docker}
	I1101 09:27:17.794062  552471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:27:17.816000  552471 kubeconfig.go:125] found "ha-162787" server: "https://192.168.39.254:8443"
	I1101 09:27:17.816031  552471 api_server.go:166] Checking apiserver status ...
	I1101 09:27:17.816073  552471 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:27:17.837708  552471 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1803/cgroup
	W1101 09:27:17.851072  552471 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1803/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:27:17.851138  552471 ssh_runner.go:195] Run: ls
	I1101 09:27:17.856932  552471 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1101 09:27:17.862475  552471 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1101 09:27:17.862502  552471 status.go:463] ha-162787-m03 apiserver status = Running (err=<nil>)
	I1101 09:27:17.862512  552471 status.go:176] ha-162787-m03 status: &{Name:ha-162787-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:27:17.862530  552471 status.go:174] checking status of ha-162787-m04 ...
	I1101 09:27:17.864034  552471 status.go:371] ha-162787-m04 host status = "Running" (err=<nil>)
	I1101 09:27:17.864061  552471 host.go:66] Checking if "ha-162787-m04" exists ...
	I1101 09:27:17.866588  552471 main.go:143] libmachine: domain ha-162787-m04 has defined MAC address 52:54:00:46:ce:11 in network mk-ha-162787
	I1101 09:27:17.867037  552471 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:46:ce:11", ip: ""} in network mk-ha-162787: {Iface:virbr1 ExpiryTime:2025-11-01 10:25:11 +0000 UTC Type:0 Mac:52:54:00:46:ce:11 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-162787-m04 Clientid:01:52:54:00:46:ce:11}
	I1101 09:27:17.867064  552471 main.go:143] libmachine: domain ha-162787-m04 has defined IP address 192.168.39.173 and MAC address 52:54:00:46:ce:11 in network mk-ha-162787
	I1101 09:27:17.867189  552471 host.go:66] Checking if "ha-162787-m04" exists ...
	I1101 09:27:17.867383  552471 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:27:17.869210  552471 main.go:143] libmachine: domain ha-162787-m04 has defined MAC address 52:54:00:46:ce:11 in network mk-ha-162787
	I1101 09:27:17.869534  552471 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:46:ce:11", ip: ""} in network mk-ha-162787: {Iface:virbr1 ExpiryTime:2025-11-01 10:25:11 +0000 UTC Type:0 Mac:52:54:00:46:ce:11 Iaid: IPaddr:192.168.39.173 Prefix:24 Hostname:ha-162787-m04 Clientid:01:52:54:00:46:ce:11}
	I1101 09:27:17.869563  552471 main.go:143] libmachine: domain ha-162787-m04 has defined IP address 192.168.39.173 and MAC address 52:54:00:46:ce:11 in network mk-ha-162787
	I1101 09:27:17.869681  552471 sshutil.go:53] new ssh client: &{IP:192.168.39.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/ha-162787-m04/id_rsa Username:docker}
	I1101 09:27:17.959572  552471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:27:17.979150  552471 status.go:176] ha-162787-m04 status: &{Name:ha-162787-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (86.98s)
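The status output above is what the test keys on: each node gets a block of host/kubelet/apiserver/kubeconfig states, and the command exits non-zero (exit status 7 in this run) once any host is Stopped. A small Go sketch that runs the same status command and reacts to that exit code; treat the exit-code meaning as what this particular log shows rather than a documented contract.

-- illustrative Go sketch (not recorded test output) --
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-162787", "status")
	out, err := cmd.Output() // stdout carries the per-node blocks shown above
	fmt.Print(string(out))
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// In the run above this was exit status 7 after ha-162787-m02 was stopped.
		fmt.Printf("status exited %d: at least one node is not fully running\n", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("could not run status:", err)
	}
}
-- /illustrative Go sketch --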

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.54s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.54s)

TestMultiControlPlane/serial/RestartSecondaryNode (42.83s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 node start m02 --alsologtostderr -v 5
E1101 09:27:18.821324  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/functional-854568/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-162787 node start m02 --alsologtostderr -v 5: (41.789643063s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (42.83s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (380.2s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 stop --alsologtostderr -v 5
E1101 09:28:40.743822  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/functional-854568/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:30:56.881661  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/functional-854568/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:31:24.585634  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/functional-854568/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:31:35.407054  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-162787 stop --alsologtostderr -v 5: (4m8.349355621s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 start --wait true --alsologtostderr -v 5
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-162787 start --wait true --alsologtostderr -v 5: (2m11.699537406s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (380.20s)

TestMultiControlPlane/serial/DeleteSecondaryNode (18.6s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-162787 node delete m03 --alsologtostderr -v 5: (17.907276732s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.60s)
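The go-template passed to kubectl above walks every node, filters its conditions down to the one whose type is "Ready", and prints that condition's status, so a healthy cluster yields one " True" line per remaining node. The sketch below evaluates the identical template with Go's text/template against a hand-made node list (the sample data is invented) just to show what the filter produces.

-- illustrative Go sketch (not recorded test output) --
package main

import (
	"os"
	"text/template"
)

// Same template string the test hands to kubectl get nodes -o go-template.
const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	// Invented sample shaped like the fields the template touches:
	// .items[].status.conditions[].type / .status
	type m = map[string]any
	nodes := m{
		"items": []m{
			{"status": m{"conditions": []m{
				{"type": "MemoryPressure", "status": "False"},
				{"type": "Ready", "status": "True"},
			}}},
			{"status": m{"conditions": []m{
				{"type": "Ready", "status": "True"},
			}}},
		},
	}
	t := template.Must(template.New("ready").Parse(readyTmpl))
	_ = t.Execute(os.Stdout, nodes) // prints one " True" line per node
}
-- /illustrative Go sketch --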

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.52s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.52s)

TestMultiControlPlane/serial/StopCluster (255.18s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 stop --alsologtostderr -v 5
E1101 09:35:56.881649  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/functional-854568/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:36:35.404362  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-162787 stop --alsologtostderr -v 5: (4m15.109064508s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-162787 status --alsologtostderr -v 5: exit status 7 (71.272112ms)

-- stdout --
	ha-162787
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-162787-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-162787-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1101 09:38:56.728385  555816 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:38:56.728644  555816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:38:56.728653  555816 out.go:374] Setting ErrFile to fd 2...
	I1101 09:38:56.728658  555816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:38:56.728888  555816 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-530629/.minikube/bin
	I1101 09:38:56.729094  555816 out.go:368] Setting JSON to false
	I1101 09:38:56.729122  555816 mustload.go:66] Loading cluster: ha-162787
	I1101 09:38:56.729215  555816 notify.go:221] Checking for updates...
	I1101 09:38:56.729485  555816 config.go:182] Loaded profile config "ha-162787": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:38:56.729498  555816 status.go:174] checking status of ha-162787 ...
	I1101 09:38:56.731768  555816 status.go:371] ha-162787 host status = "Stopped" (err=<nil>)
	I1101 09:38:56.731784  555816 status.go:384] host is not running, skipping remaining checks
	I1101 09:38:56.731790  555816 status.go:176] ha-162787 status: &{Name:ha-162787 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:38:56.731808  555816 status.go:174] checking status of ha-162787-m02 ...
	I1101 09:38:56.733512  555816 status.go:371] ha-162787-m02 host status = "Stopped" (err=<nil>)
	I1101 09:38:56.733529  555816 status.go:384] host is not running, skipping remaining checks
	I1101 09:38:56.733533  555816 status.go:176] ha-162787-m02 status: &{Name:ha-162787-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:38:56.733547  555816 status.go:174] checking status of ha-162787-m04 ...
	I1101 09:38:56.734907  555816 status.go:371] ha-162787-m04 host status = "Stopped" (err=<nil>)
	I1101 09:38:56.734921  555816 status.go:384] host is not running, skipping remaining checks
	I1101 09:38:56.734926  555816 status.go:176] ha-162787-m04 status: &{Name:ha-162787-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (255.18s)

TestMultiControlPlane/serial/RestartCluster (98.46s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1101 09:39:38.476458  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-162787 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m37.805289336s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (98.46s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

TestMultiControlPlane/serial/AddSecondaryNode (77.36s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 node add --control-plane --alsologtostderr -v 5
E1101 09:40:56.881796  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/functional-854568/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:41:35.403398  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-162787 node add --control-plane --alsologtostderr -v 5: (1m16.640529774s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-162787 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (77.36s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.68s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.68s)

TestJSONOutput/start/Command (79.11s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-154772 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1101 09:42:19.948818  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/functional-854568/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-154772 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m19.113848217s)
--- PASS: TestJSONOutput/start/Command (79.11s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.76s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-154772 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.68s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-154772 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.49s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-154772 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-154772 --output=json --user=testUser: (7.485245634s)
--- PASS: TestJSONOutput/stop/Command (7.49s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-487298 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-487298 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (84.100201ms)

-- stdout --
	{"specversion":"1.0","id":"18e22e6b-18e4-46d0-ad27-a15840331e0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-487298] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9180ef07-8b10-43d3-a5ba-9dbfe61e7f2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21833"}}
	{"specversion":"1.0","id":"8d6db803-81c6-4395-af46-6c5da639d497","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"14f135d9-4af2-4260-8199-725c50c6f12d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21833-530629/kubeconfig"}}
	{"specversion":"1.0","id":"fc83793f-e539-4082-be05-3581cd69d15e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-530629/.minikube"}}
	{"specversion":"1.0","id":"d4473cb0-79a3-4d9e-9b08-622c3d3b4505","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"b27b6f79-9a2b-4829-9965-874e1086fd1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d2bd1bd8-2b5e-4499-91cf-99f00a35b9ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-487298" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-487298
--- PASS: TestErrorJSONOutput (0.25s)
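Every line minikube emits under --output=json is a CloudEvents-style object with specversion, id, source, type and a data payload, as the captured stdout above shows; the final error event carries the exit code and message for the unsupported driver. Below is a sketch that decodes such a stream line by line; the field names are taken from the output above, and the program itself is not part of the suite.

-- illustrative Go sketch (not recorded test output) --
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// minikubeEvent covers only the fields this sketch reads from each JSON line.
type minikubeEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. pipe `minikube start ... --output=json` into this
	for sc.Scan() {
		var ev minikubeEvent
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // ignore anything that is not a JSON event line
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}
-- /illustrative Go sketch --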

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (83.25s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-888402 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-888402 --driver=kvm2  --container-runtime=crio: (40.016321573s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-891162 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-891162 --driver=kvm2  --container-runtime=crio: (40.509419713s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-888402
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-891162
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-891162" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-891162
helpers_test.go:175: Cleaning up "first-888402" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-888402
--- PASS: TestMinikubeProfile (83.25s)

TestMountStart/serial/StartWithMountFirst (21.02s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-090353 --memory=3072 --mount-string /tmp/TestMountStartserial1651809968/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-090353 --memory=3072 --mount-string /tmp/TestMountStartserial1651809968/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (20.015639298s)
--- PASS: TestMountStart/serial/StartWithMountFirst (21.02s)

TestMountStart/serial/VerifyMountFirst (0.32s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-090353 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-090353 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.32s)
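The verify step asks findmnt --json /minikube-host inside the VM over minikube ssh, so the 9p mount created by --mount-string comes back as structured JSON rather than a table. A hedged sketch that parses that output follows; the "filesystems" array with target/source/fstype/options fields is the usual util-linux layout, but treat those names as an assumption, since this log only records that the command ran.

-- illustrative Go sketch (not recorded test output) --
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// findmntOutput assumes util-linux's JSON layout for findmnt --json.
type findmntOutput struct {
	Filesystems []struct {
		Target  string `json:"target"`
		Source  string `json:"source"`
		Fstype  string `json:"fstype"`
		Options string `json:"options"`
	} `json:"filesystems"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "mount-start-1-090353",
		"ssh", "--", "findmnt", "--json", "/minikube-host").Output()
	if err != nil {
		fmt.Println("findmnt over ssh failed:", err)
		return
	}
	var fm findmntOutput
	if err := json.Unmarshal(out, &fm); err != nil {
		fmt.Println("unexpected findmnt output:", err)
		return
	}
	for _, fs := range fm.Filesystems {
		fmt.Printf("%s mounted from %s (%s)\n", fs.Target, fs.Source, fs.Fstype)
	}
}
-- /illustrative Go sketch --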

TestMountStart/serial/StartWithMountSecond (23.74s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-110182 --memory=3072 --mount-string /tmp/TestMountStartserial1651809968/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-110182 --memory=3072 --mount-string /tmp/TestMountStartserial1651809968/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (22.736664133s)
--- PASS: TestMountStart/serial/StartWithMountSecond (23.74s)

TestMountStart/serial/VerifyMountSecond (0.3s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-110182 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-110182 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

TestMountStart/serial/DeleteFirst (0.69s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-090353 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

TestMountStart/serial/VerifyMountPostDelete (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-110182 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-110182 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

TestMountStart/serial/Stop (1.38s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-110182
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-110182: (1.375297171s)
--- PASS: TestMountStart/serial/Stop (1.38s)

TestMountStart/serial/RestartStopped (20.89s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-110182
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-110182: (19.889000021s)
E1101 09:45:56.882094  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/functional-854568/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMountStart/serial/RestartStopped (20.89s)

TestMountStart/serial/VerifyMountPostStop (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-110182 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-110182 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

TestMultiNode/serial/FreshStart2Nodes (99.9s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-119623 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1101 09:46:35.403939  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-119623 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m39.557469522s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (99.90s)

TestMultiNode/serial/DeployApp2Nodes (5.54s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-119623 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-119623 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-119623 -- rollout status deployment/busybox: (3.846973193s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-119623 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-119623 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-119623 -- exec busybox-7b57f96db7-ngwjf -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-119623 -- exec busybox-7b57f96db7-tf6vt -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-119623 -- exec busybox-7b57f96db7-ngwjf -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-119623 -- exec busybox-7b57f96db7-tf6vt -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-119623 -- exec busybox-7b57f96db7-ngwjf -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-119623 -- exec busybox-7b57f96db7-tf6vt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.54s)
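DeployApp2Nodes resolves kubernetes.io, kubernetes.default and the fully qualified service name from inside each busybox pod via minikube kubectl exec. The sketch below replays those probes with the same invocations; the pod names are the ones this run happened to schedule, so substitute the current ones from kubectl get pods.

-- illustrative Go sketch (not recorded test output) --
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-7b57f96db7-ngwjf", "busybox-7b57f96db7-tf6vt"}
	hosts := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, host := range hosts {
			// Mirrors: out/minikube-linux-amd64 kubectl -p multinode-119623 -- exec <pod> -- nslookup <host>
			out, err := exec.Command("out/minikube-linux-amd64", "kubectl", "-p", "multinode-119623",
				"--", "exec", pod, "--", "nslookup", host).CombinedOutput()
			if err != nil {
				fmt.Printf("%s failed to resolve %s: %v\n%s", pod, host, err, out)
				continue
			}
			fmt.Printf("%s resolved %s\n", pod, host)
		}
	}
}
-- /illustrative Go sketch --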

TestMultiNode/serial/PingHostFrom2Pods (0.91s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-119623 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-119623 -- exec busybox-7b57f96db7-ngwjf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-119623 -- exec busybox-7b57f96db7-ngwjf -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-119623 -- exec busybox-7b57f96db7-tf6vt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-119623 -- exec busybox-7b57f96db7-tf6vt -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.91s)

TestMultiNode/serial/AddNode (42.77s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-119623 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-119623 -v=5 --alsologtostderr: (42.308592365s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.77s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-119623 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.46s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.46s)

TestMultiNode/serial/CopyFile (6.1s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 cp testdata/cp-test.txt multinode-119623:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 ssh -n multinode-119623 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 cp multinode-119623:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile762074052/001/cp-test_multinode-119623.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 ssh -n multinode-119623 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 cp multinode-119623:/home/docker/cp-test.txt multinode-119623-m02:/home/docker/cp-test_multinode-119623_multinode-119623-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 ssh -n multinode-119623 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 ssh -n multinode-119623-m02 "sudo cat /home/docker/cp-test_multinode-119623_multinode-119623-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 cp multinode-119623:/home/docker/cp-test.txt multinode-119623-m03:/home/docker/cp-test_multinode-119623_multinode-119623-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 ssh -n multinode-119623 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 ssh -n multinode-119623-m03 "sudo cat /home/docker/cp-test_multinode-119623_multinode-119623-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 cp testdata/cp-test.txt multinode-119623-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 ssh -n multinode-119623-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 cp multinode-119623-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile762074052/001/cp-test_multinode-119623-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 ssh -n multinode-119623-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 cp multinode-119623-m02:/home/docker/cp-test.txt multinode-119623:/home/docker/cp-test_multinode-119623-m02_multinode-119623.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 ssh -n multinode-119623-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 ssh -n multinode-119623 "sudo cat /home/docker/cp-test_multinode-119623-m02_multinode-119623.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 cp multinode-119623-m02:/home/docker/cp-test.txt multinode-119623-m03:/home/docker/cp-test_multinode-119623-m02_multinode-119623-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 ssh -n multinode-119623-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 ssh -n multinode-119623-m03 "sudo cat /home/docker/cp-test_multinode-119623-m02_multinode-119623-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 cp testdata/cp-test.txt multinode-119623-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 ssh -n multinode-119623-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 cp multinode-119623-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile762074052/001/cp-test_multinode-119623-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 ssh -n multinode-119623-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 cp multinode-119623-m03:/home/docker/cp-test.txt multinode-119623:/home/docker/cp-test_multinode-119623-m03_multinode-119623.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 ssh -n multinode-119623-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 ssh -n multinode-119623 "sudo cat /home/docker/cp-test_multinode-119623-m03_multinode-119623.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 cp multinode-119623-m03:/home/docker/cp-test.txt multinode-119623-m02:/home/docker/cp-test_multinode-119623-m03_multinode-119623-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 ssh -n multinode-119623-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 ssh -n multinode-119623-m02 "sudo cat /home/docker/cp-test_multinode-119623-m03_multinode-119623-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.10s)
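The copy matrix above walks minikube's three file-copy directions: host to node, node back to the host, and node to node, each verified by ssh-ing into the destination and cat-ing the file. A minimal sketch of the same flow, assuming the plain `minikube` binary instead of the test's out/minikube-linux-amd64 and reusing the profile and paths from this run (the destination file names below are illustrative):

	minikube -p multinode-119623 cp testdata/cp-test.txt multinode-119623:/home/docker/cp-test.txt
	minikube -p multinode-119623 cp multinode-119623:/home/docker/cp-test.txt ./cp-test-local.txt
	minikube -p multinode-119623 cp multinode-119623:/home/docker/cp-test.txt multinode-119623-m02:/home/docker/cp-test-copy.txt
	minikube -p multinode-119623 ssh -n multinode-119623-m02 "sudo cat /home/docker/cp-test-copy.txt"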

                                                
                                    
TestMultiNode/serial/StopNode (2.24s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-119623 node stop m03: (1.575543807s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-119623 status: exit status 7 (333.470198ms)

                                                
                                                
-- stdout --
	multinode-119623
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-119623-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-119623-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-119623 status --alsologtostderr: exit status 7 (332.125848ms)

                                                
                                                
-- stdout --
	multinode-119623
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-119623-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-119623-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:48:36.748391  561453 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:48:36.748660  561453 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:48:36.748671  561453 out.go:374] Setting ErrFile to fd 2...
	I1101 09:48:36.748675  561453 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:48:36.748967  561453 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-530629/.minikube/bin
	I1101 09:48:36.749188  561453 out.go:368] Setting JSON to false
	I1101 09:48:36.749219  561453 mustload.go:66] Loading cluster: multinode-119623
	I1101 09:48:36.749337  561453 notify.go:221] Checking for updates...
	I1101 09:48:36.749700  561453 config.go:182] Loaded profile config "multinode-119623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:48:36.749721  561453 status.go:174] checking status of multinode-119623 ...
	I1101 09:48:36.751938  561453 status.go:371] multinode-119623 host status = "Running" (err=<nil>)
	I1101 09:48:36.751959  561453 host.go:66] Checking if "multinode-119623" exists ...
	I1101 09:48:36.754418  561453 main.go:143] libmachine: domain multinode-119623 has defined MAC address 52:54:00:4f:f2:1a in network mk-multinode-119623
	I1101 09:48:36.754868  561453 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4f:f2:1a", ip: ""} in network mk-multinode-119623: {Iface:virbr1 ExpiryTime:2025-11-01 10:46:15 +0000 UTC Type:0 Mac:52:54:00:4f:f2:1a Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-119623 Clientid:01:52:54:00:4f:f2:1a}
	I1101 09:48:36.754921  561453 main.go:143] libmachine: domain multinode-119623 has defined IP address 192.168.39.191 and MAC address 52:54:00:4f:f2:1a in network mk-multinode-119623
	I1101 09:48:36.755060  561453 host.go:66] Checking if "multinode-119623" exists ...
	I1101 09:48:36.755329  561453 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:48:36.757875  561453 main.go:143] libmachine: domain multinode-119623 has defined MAC address 52:54:00:4f:f2:1a in network mk-multinode-119623
	I1101 09:48:36.758379  561453 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4f:f2:1a", ip: ""} in network mk-multinode-119623: {Iface:virbr1 ExpiryTime:2025-11-01 10:46:15 +0000 UTC Type:0 Mac:52:54:00:4f:f2:1a Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:multinode-119623 Clientid:01:52:54:00:4f:f2:1a}
	I1101 09:48:36.758416  561453 main.go:143] libmachine: domain multinode-119623 has defined IP address 192.168.39.191 and MAC address 52:54:00:4f:f2:1a in network mk-multinode-119623
	I1101 09:48:36.758624  561453 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/multinode-119623/id_rsa Username:docker}
	I1101 09:48:36.838263  561453 ssh_runner.go:195] Run: systemctl --version
	I1101 09:48:36.844745  561453 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:48:36.863753  561453 kubeconfig.go:125] found "multinode-119623" server: "https://192.168.39.191:8443"
	I1101 09:48:36.863800  561453 api_server.go:166] Checking apiserver status ...
	I1101 09:48:36.863843  561453 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:48:36.886005  561453 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1385/cgroup
	W1101 09:48:36.899171  561453 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1385/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:48:36.899241  561453 ssh_runner.go:195] Run: ls
	I1101 09:48:36.904542  561453 api_server.go:253] Checking apiserver healthz at https://192.168.39.191:8443/healthz ...
	I1101 09:48:36.909416  561453 api_server.go:279] https://192.168.39.191:8443/healthz returned 200:
	ok
	I1101 09:48:36.909446  561453 status.go:463] multinode-119623 apiserver status = Running (err=<nil>)
	I1101 09:48:36.909457  561453 status.go:176] multinode-119623 status: &{Name:multinode-119623 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:48:36.909477  561453 status.go:174] checking status of multinode-119623-m02 ...
	I1101 09:48:36.911187  561453 status.go:371] multinode-119623-m02 host status = "Running" (err=<nil>)
	I1101 09:48:36.911208  561453 host.go:66] Checking if "multinode-119623-m02" exists ...
	I1101 09:48:36.914372  561453 main.go:143] libmachine: domain multinode-119623-m02 has defined MAC address 52:54:00:34:34:8a in network mk-multinode-119623
	I1101 09:48:36.914788  561453 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:34:34:8a", ip: ""} in network mk-multinode-119623: {Iface:virbr1 ExpiryTime:2025-11-01 10:47:12 +0000 UTC Type:0 Mac:52:54:00:34:34:8a Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:multinode-119623-m02 Clientid:01:52:54:00:34:34:8a}
	I1101 09:48:36.914813  561453 main.go:143] libmachine: domain multinode-119623-m02 has defined IP address 192.168.39.98 and MAC address 52:54:00:34:34:8a in network mk-multinode-119623
	I1101 09:48:36.914947  561453 host.go:66] Checking if "multinode-119623-m02" exists ...
	I1101 09:48:36.915171  561453 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:48:36.917606  561453 main.go:143] libmachine: domain multinode-119623-m02 has defined MAC address 52:54:00:34:34:8a in network mk-multinode-119623
	I1101 09:48:36.918247  561453 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:34:34:8a", ip: ""} in network mk-multinode-119623: {Iface:virbr1 ExpiryTime:2025-11-01 10:47:12 +0000 UTC Type:0 Mac:52:54:00:34:34:8a Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:multinode-119623-m02 Clientid:01:52:54:00:34:34:8a}
	I1101 09:48:36.918278  561453 main.go:143] libmachine: domain multinode-119623-m02 has defined IP address 192.168.39.98 and MAC address 52:54:00:34:34:8a in network mk-multinode-119623
	I1101 09:48:36.918501  561453 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21833-530629/.minikube/machines/multinode-119623-m02/id_rsa Username:docker}
	I1101 09:48:37.000383  561453 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:48:37.016444  561453 status.go:176] multinode-119623-m02 status: &{Name:multinode-119623-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:48:37.016482  561453 status.go:174] checking status of multinode-119623-m03 ...
	I1101 09:48:37.018085  561453 status.go:371] multinode-119623-m03 host status = "Stopped" (err=<nil>)
	I1101 09:48:37.018109  561453 status.go:384] host is not running, skipping remaining checks
	I1101 09:48:37.018115  561453 status.go:176] multinode-119623-m03 status: &{Name:multinode-119623-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)
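As the status output above shows, stopping one worker leaves the control plane and the other worker running, and `minikube status` then exits non-zero (exit status 7 in this run) because one host reports Stopped. Roughly the same check, assuming the plain `minikube` binary and the profile from this run:

	minikube -p multinode-119623 node stop m03
	minikube -p multinode-119623 status || echo "status exited non-zero: $?"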

                                                
                                    
TestMultiNode/serial/StartAfterStop (40.57s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-119623 node start m03 -v=5 --alsologtostderr: (40.047789002s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.57s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (312.97s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-119623
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-119623
E1101 09:50:56.881866  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/functional-854568/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:51:35.411061  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-119623: (2m34.348797301s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-119623 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-119623 --wait=true -v=5 --alsologtostderr: (2m38.48859471s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-119623
--- PASS: TestMultiNode/serial/RestartKeepsNodes (312.97s)
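The point of this test is that a full `stop` followed by `start --wait=true` restores every node the profile had before, which the second `node list` confirms. A minimal sketch of that cycle, assuming the plain `minikube` binary and the same profile name:

	minikube node list -p multinode-119623
	minikube stop -p multinode-119623
	minikube start -p multinode-119623 --wait=true
	minikube node list -p multinode-119623    # expected to show the same nodes as before the stop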

                                                
                                    
TestMultiNode/serial/DeleteNode (2.67s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-119623 node delete m03: (2.203220544s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.67s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (178.21s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 stop
E1101 09:55:56.881292  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/functional-854568/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:56:18.478665  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:56:35.411374  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-119623 stop: (2m58.077220823s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-119623 status: exit status 7 (67.568913ms)

                                                
                                                
-- stdout --
	multinode-119623
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-119623-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-119623 status --alsologtostderr: exit status 7 (63.691885ms)

                                                
                                                
-- stdout --
	multinode-119623
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-119623-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:57:31.427657  563898 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:57:31.427757  563898 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:57:31.427765  563898 out.go:374] Setting ErrFile to fd 2...
	I1101 09:57:31.427769  563898 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:57:31.428016  563898 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-530629/.minikube/bin
	I1101 09:57:31.428180  563898 out.go:368] Setting JSON to false
	I1101 09:57:31.428207  563898 mustload.go:66] Loading cluster: multinode-119623
	I1101 09:57:31.428335  563898 notify.go:221] Checking for updates...
	I1101 09:57:31.428577  563898 config.go:182] Loaded profile config "multinode-119623": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:57:31.428592  563898 status.go:174] checking status of multinode-119623 ...
	I1101 09:57:31.430679  563898 status.go:371] multinode-119623 host status = "Stopped" (err=<nil>)
	I1101 09:57:31.430695  563898 status.go:384] host is not running, skipping remaining checks
	I1101 09:57:31.430700  563898 status.go:176] multinode-119623 status: &{Name:multinode-119623 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:57:31.430729  563898 status.go:174] checking status of multinode-119623-m02 ...
	I1101 09:57:31.432077  563898 status.go:371] multinode-119623-m02 host status = "Stopped" (err=<nil>)
	I1101 09:57:31.432091  563898 status.go:384] host is not running, skipping remaining checks
	I1101 09:57:31.432096  563898 status.go:176] multinode-119623-m02 status: &{Name:multinode-119623-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (178.21s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (96.93s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-119623 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1101 09:58:59.950437  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/functional-854568/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-119623 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m36.460573208s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-119623 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (96.93s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (42.26s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-119623
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-119623-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-119623-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (97.188053ms)

                                                
                                                
-- stdout --
	* [multinode-119623-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21833
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21833-530629/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-530629/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-119623-m02' is duplicated with machine name 'multinode-119623-m02' in profile 'multinode-119623'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-119623-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-119623-m03 --driver=kvm2  --container-runtime=crio: (41.01098151s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-119623
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-119623: exit status 80 (219.431346ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-119623 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-119623-m03 already exists in multinode-119623-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-119623-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (42.26s)
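Both rejections above are name-collision checks: a new profile may not reuse a machine name that already belongs to another profile (exit status 14, MK_USAGE), and `node add` refuses when the node name it would create already exists as a separate profile (exit status 80, GUEST_NODE_ADD). A sketch of the two colliding commands from this run, assuming the plain `minikube` binary:

	minikube start -p multinode-119623-m02 --driver=kvm2 --container-runtime=crio    # rejected: machine name already used by profile multinode-119623
	minikube node add -p multinode-119623                                            # rejected while a standalone multinode-119623-m03 profile exists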

                                                
                                    
TestScheduledStopUnix (110.98s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-293009 --memory=3072 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-293009 --memory=3072 --driver=kvm2  --container-runtime=crio: (39.28471042s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-293009 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-293009 -n scheduled-stop-293009
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-293009 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1101 10:03:07.530354  534515 retry.go:31] will retry after 82.386µs: open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/scheduled-stop-293009/pid: no such file or directory
I1101 10:03:07.531519  534515 retry.go:31] will retry after 84.131µs: open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/scheduled-stop-293009/pid: no such file or directory
I1101 10:03:07.532670  534515 retry.go:31] will retry after 332.876µs: open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/scheduled-stop-293009/pid: no such file or directory
I1101 10:03:07.533806  534515 retry.go:31] will retry after 241.931µs: open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/scheduled-stop-293009/pid: no such file or directory
I1101 10:03:07.534942  534515 retry.go:31] will retry after 577.699µs: open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/scheduled-stop-293009/pid: no such file or directory
I1101 10:03:07.536082  534515 retry.go:31] will retry after 1.13906ms: open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/scheduled-stop-293009/pid: no such file or directory
I1101 10:03:07.538312  534515 retry.go:31] will retry after 1.550282ms: open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/scheduled-stop-293009/pid: no such file or directory
I1101 10:03:07.540499  534515 retry.go:31] will retry after 1.260616ms: open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/scheduled-stop-293009/pid: no such file or directory
I1101 10:03:07.542704  534515 retry.go:31] will retry after 1.899089ms: open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/scheduled-stop-293009/pid: no such file or directory
I1101 10:03:07.544943  534515 retry.go:31] will retry after 4.391203ms: open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/scheduled-stop-293009/pid: no such file or directory
I1101 10:03:07.550243  534515 retry.go:31] will retry after 7.964972ms: open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/scheduled-stop-293009/pid: no such file or directory
I1101 10:03:07.558796  534515 retry.go:31] will retry after 12.554804ms: open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/scheduled-stop-293009/pid: no such file or directory
I1101 10:03:07.572141  534515 retry.go:31] will retry after 8.01547ms: open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/scheduled-stop-293009/pid: no such file or directory
I1101 10:03:07.580421  534515 retry.go:31] will retry after 27.678906ms: open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/scheduled-stop-293009/pid: no such file or directory
I1101 10:03:07.608676  534515 retry.go:31] will retry after 42.783678ms: open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/scheduled-stop-293009/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-293009 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-293009 -n scheduled-stop-293009
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-293009
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-293009 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-293009
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-293009: exit status 7 (63.373528ms)

                                                
                                                
-- stdout --
	scheduled-stop-293009
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-293009 -n scheduled-stop-293009
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-293009 -n scheduled-stop-293009: exit status 7 (66.003217ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-293009" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-293009
--- PASS: TestScheduledStopUnix (110.98s)
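The scheduled-stop flow above: schedule a stop well in the future, cancel it, then schedule a short delay and wait for the host to reach Stopped (after which `status` exits 7 in this run). A minimal sketch, assuming the plain `minikube` binary and the profile name from this run:

	minikube stop -p scheduled-stop-293009 --schedule 5m
	minikube stop -p scheduled-stop-293009 --cancel-scheduled
	minikube stop -p scheduled-stop-293009 --schedule 15s
	sleep 20 && minikube status -p scheduled-stop-293009    # expected to report Stopped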

                                                
                                    
TestRunningBinaryUpgrade (166.58s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.4282478991 start -p running-upgrade-663857 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.4282478991 start -p running-upgrade-663857 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m46.145062756s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-663857 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-663857 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (58.966215701s)
helpers_test.go:175: Cleaning up "running-upgrade-663857" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-663857
--- PASS: TestRunningBinaryUpgrade (166.58s)

                                                
                                    
TestKubernetesUpgrade (273.89s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-353156 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-353156 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m4.540195154s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-353156
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-353156: (2.31044883s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-353156 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-353156 status --format={{.Host}}: exit status 7 (94.61637ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-353156 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-353156 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (53.302429427s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-353156 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-353156 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-353156 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (94.8145ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-353156] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21833
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21833-530629/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-530629/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-353156
	    minikube start -p kubernetes-upgrade-353156 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3531562 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-353156 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-353156 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1101 10:06:35.404238  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-353156 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (2m32.43830043s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-353156" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-353156
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-353156: (1.051089093s)
--- PASS: TestKubernetesUpgrade (273.89s)
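The upgrade path exercised here is: start on an older Kubernetes, stop, start the same profile on a newer version, then confirm that an in-place downgrade is refused (exit status 106, K8S_DOWNGRADE_UNSUPPORTED) with the delete/recreate suggestions shown above. A sketch of the same sequence with the versions from this run, assuming the plain `minikube` binary:

	minikube start -p kubernetes-upgrade-353156 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio
	minikube stop -p kubernetes-upgrade-353156
	minikube start -p kubernetes-upgrade-353156 --kubernetes-version=v1.34.1 --driver=kvm2 --container-runtime=crio
	minikube start -p kubernetes-upgrade-353156 --kubernetes-version=v1.28.0    # refused: downgrading an existing cluster is not supported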

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-336039 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-336039 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (100.491176ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-336039] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21833
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21833-530629/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-530629/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
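The rejection above is plain flag validation: `--no-kubernetes` cannot be combined with an explicit `--kubernetes-version`, and if a version is pinned via global config the error suggests unsetting it first. A sketch, assuming the plain `minikube` binary:

	minikube start -p NoKubernetes-336039 --no-kubernetes --kubernetes-version=v1.28.0    # exits with MK_USAGE
	minikube config unset kubernetes-version    # clear a globally configured version, per the suggestion above
	minikube start -p NoKubernetes-336039 --no-kubernetes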

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (85.8s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-336039 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-336039 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m25.478070649s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-336039 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (85.80s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (30.04s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-336039 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-336039 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (28.748770253s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-336039 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-336039 status -o json: exit status 2 (249.972089ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-336039","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-336039
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-336039: (1.036775741s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (30.04s)
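Re-running `start` with `--no-kubernetes` against a profile that already has Kubernetes leaves the VM up but stops the Kubernetes components, so `status` reports Host Running / Kubelet Stopped and exits non-zero (exit status 2 in this run). A sketch, assuming the plain `minikube` binary:

	minikube start -p NoKubernetes-336039 --no-kubernetes --memory=3072 --driver=kvm2 --container-runtime=crio
	minikube -p NoKubernetes-336039 status -o json    # e.g. {"Host":"Running","Kubelet":"Stopped",...}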

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.59s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.59s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (109.53s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3585668295 start -p stopped-upgrade-313492 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
E1101 10:05:56.881338  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/functional-854568/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3585668295 start -p stopped-upgrade-313492 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m0.560568981s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3585668295 -p stopped-upgrade-313492 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3585668295 -p stopped-upgrade-313492 stop: (1.851549159s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-313492 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-313492 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (47.114506828s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (109.53s)
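The upgrade-from-stopped scenario: create and stop a cluster with an older minikube release, then start the same profile with the newer binary. A sketch of the flow; the /tmp path below is the test's temporary download of v1.32.0, so substitute whatever older release binary you have:

	/tmp/minikube-v1.32.0.3585668295 start -p stopped-upgrade-313492 --memory=3072 --vm-driver=kvm2 --container-runtime=crio
	/tmp/minikube-v1.32.0.3585668295 -p stopped-upgrade-313492 stop
	out/minikube-linux-amd64 start -p stopped-upgrade-313492 --memory=3072 --driver=kvm2 --container-runtime=crio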

                                                
                                    
TestNoKubernetes/serial/Start (48.2s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-336039 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-336039 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (48.203111438s)
--- PASS: TestNoKubernetes/serial/Start (48.20s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-336039 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-336039 "sudo systemctl is-active --quiet service kubelet": exit status 1 (187.350245ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)
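The check above relies on `systemctl is-active --quiet` exiting non-zero for an inactive unit, so in a --no-kubernetes profile the ssh command is expected to fail. A sketch, assuming the plain `minikube` binary:

	minikube ssh -p NoKubernetes-336039 "sudo systemctl is-active --quiet service kubelet" \
	  && echo "kubelet is active" \
	  || echo "kubelet is not active (expected for --no-kubernetes)"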

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.16s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.16s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.48s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-336039
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-336039: (1.481512968s)
--- PASS: TestNoKubernetes/serial/Stop (1.48s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (59.85s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-336039 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-336039 --driver=kvm2  --container-runtime=crio: (59.84711785s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (59.85s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-313492
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-313492: (1.011869019s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)

                                                
                                    
TestNetworkPlugins/group/false (5.73s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-242892 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-242892 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (171.447214ms)

                                                
                                                
-- stdout --
	* [false-242892] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21833
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21833-530629/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-530629/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:07:47.706600  570219 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:07:47.706748  570219 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:07:47.706764  570219 out.go:374] Setting ErrFile to fd 2...
	I1101 10:07:47.706772  570219 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:07:47.707148  570219 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-530629/.minikube/bin
	I1101 10:07:47.707930  570219 out.go:368] Setting JSON to false
	I1101 10:07:47.709409  570219 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":67790,"bootTime":1761923878,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:07:47.709501  570219 start.go:143] virtualization: kvm guest
	I1101 10:07:47.712766  570219 out.go:179] * [false-242892] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:07:47.714577  570219 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 10:07:47.714576  570219 notify.go:221] Checking for updates...
	I1101 10:07:47.717239  570219 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:07:47.718357  570219 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-530629/kubeconfig
	I1101 10:07:47.719662  570219 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-530629/.minikube
	I1101 10:07:47.720920  570219 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:07:47.722178  570219 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:07:47.724161  570219 config.go:182] Loaded profile config "NoKubernetes-336039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1101 10:07:47.724335  570219 config.go:182] Loaded profile config "force-systemd-env-940638": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:07:47.724443  570219 config.go:182] Loaded profile config "kubernetes-upgrade-353156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:07:47.724597  570219 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:07:47.777524  570219 out.go:179] * Using the kvm2 driver based on user configuration
	I1101 10:07:47.778859  570219 start.go:309] selected driver: kvm2
	I1101 10:07:47.778883  570219 start.go:930] validating driver "kvm2" against <nil>
	I1101 10:07:47.778940  570219 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:07:47.781753  570219 out.go:203] 
	W1101 10:07:47.783027  570219 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1101 10:07:47.784301  570219 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-242892 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-242892

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-242892

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-242892

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-242892

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-242892

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-242892

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-242892

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-242892

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-242892

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-242892

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-242892"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-242892"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-242892"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-242892

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-242892"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-242892"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-242892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-242892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-242892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-242892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-242892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-242892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-242892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-242892" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-242892"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-242892"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-242892"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-242892"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-242892"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-242892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-242892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-242892" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-242892"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-242892"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-242892"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-242892"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-242892"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 10:06:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.4:8443
  name: kubernetes-upgrade-353156
contexts:
- context:
    cluster: kubernetes-upgrade-353156
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 10:06:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-353156
  name: kubernetes-upgrade-353156
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-353156
  user:
    client-certificate: /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/kubernetes-upgrade-353156/client.crt
    client-key: /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/kubernetes-upgrade-353156/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-242892

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-242892"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-242892"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-242892"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-242892"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-242892"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-242892"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-242892"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-242892"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-242892"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-242892"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-242892"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-242892"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-242892"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-242892"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-242892"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-242892"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-242892"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-242892"

                                                
                                                
----------------------- debugLogs end: false-242892 [took: 5.386283509s] --------------------------------
helpers_test.go:175: Cleaning up "false-242892" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-242892
--- PASS: TestNetworkPlugins/group/false (5.73s)

                                                
                                    
TestISOImage/Setup (29.64s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:46: (dbg) Run:  out/minikube-linux-amd64 start -p guest-930796 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:46: (dbg) Done: out/minikube-linux-amd64 start -p guest-930796 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.64255745s)
--- PASS: TestISOImage/Setup (29.64s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-336039 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-336039 "sudo systemctl is-active --quiet service kubelet": exit status 1 (191.055727ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
TestPause/serial/Start (101.5s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-533709 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-533709 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m41.499713572s)
--- PASS: TestPause/serial/Start (101.50s)

                                                
                                    
TestISOImage/Binaries/crictl (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-930796 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.20s)

                                                
                                    
TestISOImage/Binaries/curl (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-930796 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.19s)

                                                
                                    
TestISOImage/Binaries/docker (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-930796 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.21s)

                                                
                                    
TestISOImage/Binaries/git (0.23s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-930796 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.23s)

                                                
                                    
TestISOImage/Binaries/iptables (0.22s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-930796 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.22s)

                                                
                                    
TestISOImage/Binaries/podman (0.22s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-930796 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.22s)

                                                
                                    
TestISOImage/Binaries/rsync (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-930796 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.20s)

                                                
                                    
TestISOImage/Binaries/socat (0.22s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-930796 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.22s)

                                                
                                    
TestISOImage/Binaries/wget (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-930796 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.20s)

                                                
                                    
TestISOImage/Binaries/VBoxControl (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-930796 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.21s)

                                                
                                    
TestISOImage/Binaries/VBoxService (0.22s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-930796 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (90.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-080837 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-080837 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m30.006828215s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (90.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (64.85s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-468183 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-468183 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m4.853710505s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (64.85s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (74.66s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-972522 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1101 10:10:56.881523  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/functional-854568/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-972522 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m14.663095623s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (74.66s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-468183 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [72a77140-cd15-447c-8850-b6998dfc0079] Pending
helpers_test.go:352: "busybox" [72a77140-cd15-447c-8850-b6998dfc0079] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [72a77140-cd15-447c-8850-b6998dfc0079] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.005225287s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-468183 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-080837 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c09b9b6e-1468-4f87-b2f9-134bb1208ff9] Pending
helpers_test.go:352: "busybox" [c09b9b6e-1468-4f87-b2f9-134bb1208ff9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c09b9b6e-1468-4f87-b2f9-134bb1208ff9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004494284s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-080837 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.35s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-468183 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-468183 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.06842451s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-468183 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.15s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (75.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-468183 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-468183 --alsologtostderr -v=3: (1m15.343474816s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (75.34s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-080837 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-080837 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.067189463s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-080837 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (88.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-080837 --alsologtostderr -v=3
E1101 10:11:35.403522  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-080837 --alsologtostderr -v=3: (1m28.233415268s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (88.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-972522 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [0aaeb99d-01a0-4869-8c8d-e696b34c1c28] Pending
helpers_test.go:352: "busybox" [0aaeb99d-01a0-4869-8c8d-e696b34c1c28] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [0aaeb99d-01a0-4869-8c8d-e696b34c1c28] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004625172s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-972522 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-972522 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-972522 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (86.76s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-972522 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-972522 --alsologtostderr -v=3: (1m26.758873875s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (86.76s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-468183 -n embed-certs-468183
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-468183 -n embed-certs-468183: exit status 7 (63.425297ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-468183 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (50.65s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-468183 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1101 10:12:58.481336  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-468183 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (50.284236719s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-468183 -n embed-certs-468183
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (50.65s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-080837 -n old-k8s-version-080837
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-080837 -n old-k8s-version-080837: exit status 7 (87.854975ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-080837 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (87.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-080837 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-080837 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m27.551338433s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-080837 -n old-k8s-version-080837
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (87.97s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (100.58s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-096521 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-096521 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m40.583550347s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (100.58s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xvgp6" [aa873179-7feb-41ea-8385-ac03ddb9def2] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xvgp6" [aa873179-7feb-41ea-8385-ac03ddb9def2] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.006327139s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xvgp6" [aa873179-7feb-41ea-8385-ac03ddb9def2] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004759533s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-468183 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-972522 -n no-preload-972522
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-972522 -n no-preload-972522: exit status 7 (77.742033ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-972522 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (63.74s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-972522 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-972522 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m3.344269201s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-972522 -n no-preload-972522
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (63.74s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-468183 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-468183 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-468183 --alsologtostderr -v=1: (1.208443558s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-468183 -n embed-certs-468183
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-468183 -n embed-certs-468183: exit status 2 (247.492516ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-468183 -n embed-certs-468183
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-468183 -n embed-certs-468183: exit status 2 (243.805061ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-468183 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-468183 --alsologtostderr -v=1: (1.032176029s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-468183 -n embed-certs-468183
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-468183 -n embed-certs-468183
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.35s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (63.02s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-696801 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-696801 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m3.014966744s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (63.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (10.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-nsd45" [23052fff-a691-425e-8090-c1eeddccbacc] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-nsd45" [23052fff-a691-425e-8090-c1eeddccbacc] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.004293323s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (10.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-nsd45" [23052fff-a691-425e-8090-c1eeddccbacc] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00402319s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-080837 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-080837 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-080837 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-080837 --alsologtostderr -v=1: (1.100580243s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-080837 -n old-k8s-version-080837
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-080837 -n old-k8s-version-080837: exit status 2 (255.138413ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-080837 -n old-k8s-version-080837
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-080837 -n old-k8s-version-080837: exit status 2 (247.803229ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-080837 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p old-k8s-version-080837 --alsologtostderr -v=1: (1.001529066s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-080837 -n old-k8s-version-080837
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-080837 -n old-k8s-version-080837
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.46s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zdl65" [fbd24ddc-ff24-45d8-bf22-a0c05fa030fe] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zdl65" [fbd24ddc-ff24-45d8-bf22-a0c05fa030fe] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.00353203s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.01s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (52.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-242892 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-242892 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (52.962085824s)
--- PASS: TestNetworkPlugins/group/auto/Start (52.96s)
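
Each network-plugin group in this run starts its own profile with the same base flags; only the CNI selection differs (nothing extra for auto, --cni=flannel/bridge/calico/kindnet, --cni=<manifest> for custom-flannel, or --enable-default-cni=true, as in the groups further down). A sketch of that invocation in Go, not the test harness itself:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Base flags copied from the auto group above; append e.g. "--cni=flannel"
	// (or another selection) to reproduce the other plugin groups.
	args := []string{
		"start", "-p", "auto-242892",
		"--memory=3072", "--alsologtostderr",
		"--wait=true", "--wait-timeout=15m",
		"--driver=kvm2", "--container-runtime=crio",
	}
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}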

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-096521 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a986bfdf-df43-4f77-9503-23ed6c6d6842] Pending
helpers_test.go:352: "busybox" [a986bfdf-df43-4f77-9503-23ed6c6d6842] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a986bfdf-df43-4f77-9503-23ed6c6d6842] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.008162953s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-096521 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.34s)
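
The DeployApp check above creates a busybox pod from the repo's testdata manifest, waits for it to run, and then reads the open-file-descriptor limit inside the container. A minimal sketch of the same steps; kubectl wait stands in here for the test's polling helper:

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a kubectl command against the given context and returns its output.
func kubectl(kubeContext string, args ...string) (string, error) {
	full := append([]string{"--context", kubeContext}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	return string(out), err
}

func main() {
	ctx := "default-k8s-diff-port-096521" // profile from the run above

	// Create the test pod from the manifest used by the test.
	kubectl(ctx, "create", "-f", "testdata/busybox.yaml")

	// Block until the pod is Ready; 8m matches the test's budget.
	kubectl(ctx, "wait", "--for=condition=Ready", "pod/busybox", "--timeout=8m")

	// The assertion itself: the file-descriptor limit inside the container.
	out, err := kubectl(ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
	fmt.Println(out, err)
}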

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-696801 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-696801 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.012671591s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-696801 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-696801 --alsologtostderr -v=3: (10.353534684s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.35s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zdl65" [fbd24ddc-ff24-45d8-bf22-a0c05fa030fe] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004528548s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-972522 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-096521 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-096521 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.015393305s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-096521 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (86.75s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-096521 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-096521 --alsologtostderr -v=3: (1m26.749543274s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (86.75s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-972522 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)
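
VerifyKubernetesImages lists the images present in the profile as JSON and reports anything outside the expected Kubernetes set, which is why the busybox image is called out above. A sketch of that kind of filtering; the repoTags field name and the registry.k8s.io prefix rule are assumptions for illustration, not minikube's actual schema or the test's exact allow-list:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// listedImage is a guessed shape for the image list JSON entries;
// only the tags are used here and the field name is an assumption.
type listedImage struct {
	RepoTags []string `json:"repoTags"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "no-preload-972522", "image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}

	var images []listedImage
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}

	// Simplified rule: flag anything not pulled from registry.k8s.io.
	for _, img := range images {
		for _, tag := range img.RepoTags {
			if !strings.HasPrefix(tag, "registry.k8s.io/") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}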

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.82s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-972522 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-972522 -n no-preload-972522
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-972522 -n no-preload-972522: exit status 2 (259.215229ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-972522 -n no-preload-972522
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-972522 -n no-preload-972522: exit status 2 (269.868218ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-972522 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-972522 -n no-preload-972522
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-972522 -n no-preload-972522
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.82s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-696801 -n newest-cni-696801
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-696801 -n newest-cni-696801: exit status 7 (67.820102ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-696801 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)
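
EnableAddonAfterStop relies on two behaviours: status --format={{.Host}} prints Stopped and exits 7 for a stopped profile, and addons can still be enabled in that state. A Go sketch of both steps, reusing the flags from the run above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const bin = "out/minikube-linux-amd64"
	const profile = "newest-cni-696801"

	// On a stopped profile this prints Stopped and exits 7; the test accepts that.
	out, err := exec.Command(bin, "status", "--format={{.Host}}",
		"-p", profile, "-n", profile).CombinedOutput()
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode()
	}
	fmt.Printf("host=%q exit=%d\n", out, code)

	// Enabling the dashboard addon works even while the profile is stopped;
	// the image override is the same one the test passes.
	enable := exec.Command(bin, "addons", "enable", "dashboard", "-p", profile,
		"--images=MetricsScraper=registry.k8s.io/echoserver:1.4")
	fmt.Println(enable.Run())
}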

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (41.89s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-696801 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-696801 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (41.408019226s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-696801 -n newest-cni-696801
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (41.89s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (91.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-242892 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E1101 10:15:39.952355  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/functional-854568/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-242892 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m31.02629641s)
--- PASS: TestNetworkPlugins/group/flannel/Start (91.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-242892 "pgrep -a kubelet"
I1101 10:15:43.818869  534515 config.go:182] Loaded profile config "auto-242892": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)
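
The KubeletFlags check just prints the running kubelet's command line over minikube ssh so its flags can be inspected against the loaded profile config. Sketch:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// pgrep -a prints the PID followed by the full command line of the kubelet
	// inside the guest; swap the profile for any of the *-242892 profiles above.
	out, err := exec.Command("out/minikube-linux-amd64",
		"ssh", "-p", "auto-242892", "pgrep -a kubelet").CombinedOutput()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}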

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-242892 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pbt66" [a60706f1-00c3-4dde-a25c-e4a012f2d25c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-pbt66" [a60706f1-00c3-4dde-a25c-e4a012f2d25c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.006913113s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.29s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-696801 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.99s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-696801 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-696801 --alsologtostderr -v=1: (1.114120524s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-696801 -n newest-cni-696801
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-696801 -n newest-cni-696801: exit status 2 (381.659007ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-696801 -n newest-cni-696801
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-696801 -n newest-cni-696801: exit status 2 (375.794526ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-696801 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-696801 --alsologtostderr -v=1: (1.249089406s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-696801 -n newest-cni-696801
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-696801 -n newest-cni-696801
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.99s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-242892 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-242892 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-242892 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)
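
Every network-plugin group runs the same three probes against the netcat deployment from testdata/netcat-deployment.yaml: DNS resolution of kubernetes.default, a TCP connect to localhost:8080, and a hairpin connect back through the netcat service. A combined Go sketch of those probes (not the test code):

package main

import (
	"fmt"
	"os/exec"
)

// probe runs one connectivity check inside the netcat deployment and reports the result.
func probe(kubeContext, shellCmd string) {
	err := exec.Command("kubectl", "--context", kubeContext,
		"exec", "deployment/netcat", "--", "/bin/sh", "-c", shellCmd).Run()
	fmt.Printf("%-35s err=%v\n", shellCmd, err)
}

func main() {
	ctx := "auto-242892" // any of the *-242892 profiles above

	// DNS: the in-cluster resolver must answer for the kubernetes.default service.
	probe(ctx, "nslookup kubernetes.default")
	// Localhost: the pod can reach its own listener on port 8080.
	probe(ctx, "nc -w 5 -i 5 -z localhost 8080")
	// Hairpin: the pod can reach itself back through the netcat service name.
	probe(ctx, "nc -w 5 -i 5 -z netcat 8080")
}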

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (85.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-242892 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E1101 10:15:56.881989  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/functional-854568/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-242892 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m25.77948102s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (85.78s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (104.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-242892 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E1101 10:16:22.788197  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:16:22.795091  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:16:22.806536  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:16:22.828050  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:16:22.869668  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:16:22.951781  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:16:23.113770  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:16:23.435734  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:16:24.077209  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:16:25.360622  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:16:27.922029  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-242892 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m44.278568367s)
--- PASS: TestNetworkPlugins/group/bridge/Start (104.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-096521 -n default-k8s-diff-port-096521
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-096521 -n default-k8s-diff-port-096521: exit status 7 (92.966183ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-096521 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (59.94s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-096521 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1101 10:16:33.044035  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:16:35.403913  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/addons-994396/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-096521 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (59.585145124s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-096521 -n default-k8s-diff-port-096521
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (59.94s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-m75vr" [9e37ff6d-e382-4a75-85c0-c09df1d214b8] Running
E1101 10:16:43.286480  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005901131s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-242892 "pgrep -a kubelet"
I1101 10:16:46.439689  534515 config.go:182] Loaded profile config "flannel-242892": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (13.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-242892 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wgq9z" [896bbc2c-9b0f-47cf-9c13-974e12ab165d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-wgq9z" [896bbc2c-9b0f-47cf-9c13-974e12ab165d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.004687168s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-242892 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-242892 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-242892 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (73.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-242892 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-242892 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m13.28507775s)
--- PASS: TestNetworkPlugins/group/calico/Start (73.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-242892 "pgrep -a kubelet"
I1101 10:17:21.787862  534515 config.go:182] Loaded profile config "enable-default-cni-242892": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-242892 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-h6dvn" [3559c9c7-1abe-4c9e-bdcf-f0813981ec36] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1101 10:17:26.452580  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/no-preload-972522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-h6dvn" [3559c9c7-1abe-4c9e-bdcf-f0813981ec36] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004879831s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zkj86" [1cd8d7ba-cb4d-4a04-a51c-707a48324f7a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zkj86" [1cd8d7ba-cb4d-4a04-a51c-707a48324f7a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.006649728s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-242892 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-242892 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-242892 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zkj86" [1cd8d7ba-cb4d-4a04-a51c-707a48324f7a] Running
E1101 10:17:44.730615  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005646429s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-096521 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-096521 image list --format=json
E1101 10:17:46.933976  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/no-preload-972522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.82s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-096521 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-096521 --alsologtostderr -v=1: (1.882132583s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-096521 -n default-k8s-diff-port-096521
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-096521 -n default-k8s-diff-port-096521: exit status 2 (239.402667ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-096521 -n default-k8s-diff-port-096521
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-096521 -n default-k8s-diff-port-096521: exit status 2 (229.413072ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-096521 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-096521 -n default-k8s-diff-port-096521
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-096521 -n default-k8s-diff-port-096521
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.82s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (64.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-242892 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-242892 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m4.45600847s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (64.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (96.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-242892 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-242892 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m36.738713672s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (96.74s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-242892 "pgrep -a kubelet"
I1101 10:17:54.923951  534515 config.go:182] Loaded profile config "bridge-242892": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (12.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-242892 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xwknr" [346cb34d-e2d2-4ced-84c4-eb281e5985fd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xwknr" [346cb34d-e2d2-4ced-84c4-eb281e5985fd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.006041173s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-242892 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-242892 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-242892 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//data (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-930796 ssh "df -t ext4 /data | grep /data"
E1101 10:18:27.896436  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/no-preload-972522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/PersistentMounts//data (0.19s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/docker (0.2s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-930796 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.20s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/cni (0.21s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-930796 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.21s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/kubelet (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-930796 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.18s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/minikube (0.2s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-930796 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.20s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/toolbox (0.22s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-930796 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.22s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/boot2docker (0.21s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-930796 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.21s)
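
The PersistentMounts cases all apply the same check to each directory: df, restricted to ext4 filesystems, must list the path inside the guest. A sketch that loops over the directories exercised above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Directories the ISO is expected to persist, as exercised above.
	dirs := []string{
		"/data", "/var/lib/docker", "/var/lib/cni", "/var/lib/kubelet",
		"/var/lib/minikube", "/var/lib/toolbox", "/var/lib/boot2docker",
	}
	for _, d := range dirs {
		// grep only succeeds if df reports the directory on an ext4 filesystem.
		check := fmt.Sprintf("df -t ext4 %s | grep %s", d, d)
		err := exec.Command("out/minikube-linux-amd64", "-p", "guest-930796", "ssh", check).Run()
		fmt.Printf("%-25s ok=%v\n", d, err == nil)
	}
}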

                                                
                                    
x
+
TestISOImage/eBPFSupport (0.19s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p guest-930796 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.19s)
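
The eBPF check infers support from the presence of the kernel's BTF blob at /sys/kernel/btf/vmlinux inside the guest, which exists when the kernel is built with BTF type information. A sketch of the same probe:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Prints OK when the BTF blob exists in the guest, NOT FOUND otherwise,
	// mirroring the shell test run above.
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", "guest-930796", "ssh",
		"test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'").Output()
	fmt.Println("eBPF/BTF:", strings.TrimSpace(string(out)))
}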

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-hb69t" [6a04e38c-f45d-4412-a55f-c098ce4d731d] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-hb69t" [6a04e38c-f45d-4412-a55f-c098ce4d731d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004134107s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-242892 "pgrep -a kubelet"
I1101 10:18:36.926312  534515 config.go:182] Loaded profile config "calico-242892": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (30.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-242892 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nlhqh" [3a679b5e-5d26-4a75-91f3-79b3bde194fa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-nlhqh" [3a679b5e-5d26-4a75-91f3-79b3bde194fa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 30.004572831s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (30.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-l8cq7" [08c3a406-86cb-4cfc-98ba-bb53af7eaec1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00675583s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-242892 "pgrep -a kubelet"
I1101 10:19:00.294325  534515 config.go:182] Loaded profile config "kindnet-242892": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-242892 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jhjt4" [c45943d5-d57e-497d-ae94-2d2abe5cc34d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jhjt4" [c45943d5-d57e-497d-ae94-2d2abe5cc34d] Running
E1101 10:19:06.652562  534515 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/old-k8s-version-080837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004262474s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.25s)
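The NetCatPod checks above can be replayed by hand while the profile is still up. A minimal sketch, assuming the kindnet-242892 context still exists and the netcat manifest from the repository's testdata directory is at hand:
kubectl --context kindnet-242892 replace --force -f testdata/netcat-deployment.yaml   # recreate the netcat deployment
kubectl --context kindnet-242892 wait --for=condition=ready pod -l app=netcat --timeout=120s   # wait for the pod to become Ready
kubectl --context kindnet-242892 exec deployment/netcat -- nslookup kubernetes.default   # same DNS probe the test runs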

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-242892 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-242892 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-242892 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-242892 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-242892 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-242892 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-242892 "pgrep -a kubelet"
I1101 10:19:29.827717  534515 config.go:182] Loaded profile config "custom-flannel-242892": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.27s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-242892 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ljjtt" [ea8e4ecd-99c5-4707-87fe-274ee8b2b946] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ljjtt" [ea8e4ecd-99c5-4707-87fe-274ee8b2b946] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003757681s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-242892 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-242892 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-242892 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)
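The Localhost and HairPin probes are the same nc check aimed at two targets: localhost:8080 verifies the pod can reach its own port over loopback, while dialing the netcat Service name from inside the pod exercises the hairpin path through the CNI. A manual sketch, assuming the custom-flannel-242892 context is still present:
kubectl --context custom-flannel-242892 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"   # loopback check
kubectl --context custom-flannel-242892 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"      # hairpin check via the pod's own Service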

                                                
                                    

Test skip (40/343)

Order Skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.34
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
128 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
129 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
263 TestStartStop/group/disable-driver-mounts 0.22
275 TestNetworkPlugins/group/kubenet 4.15
283 TestNetworkPlugins/group/cilium 4.65

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.34s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-994396 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.34s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.22s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-414104" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-414104
--- SKIP: TestStartStop/group/disable-driver-mounts (0.22s)

                                                
                                    
TestNetworkPlugins/group/kubenet (4.15s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-242892 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-242892

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-242892

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-242892

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-242892

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-242892

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-242892

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-242892

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-242892

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-242892

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-242892

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-242892"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-242892"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-242892"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-242892

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-242892"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-242892"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-242892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-242892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-242892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-242892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-242892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-242892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-242892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-242892" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-242892"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-242892"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-242892"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-242892"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-242892"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-242892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-242892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-242892" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-242892"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-242892"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-242892"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-242892"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-242892"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 10:06:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.4:8443
  name: kubernetes-upgrade-353156
contexts:
- context:
    cluster: kubernetes-upgrade-353156
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 10:06:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-353156
  name: kubernetes-upgrade-353156
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-353156
  user:
    client-certificate: /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/kubernetes-upgrade-353156/client.crt
    client-key: /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/kubernetes-upgrade-353156/client.key
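Note that current-context is empty and there is no kubenet-242892 entry, which is why every kubectl call in this debug dump reports that the context was not found. A quick way to confirm what the kubeconfig actually contains, sketched here rather than taken from the run:
kubectl config get-contexts                              # list entries in the active kubeconfig
kubectl config use-context kubernetes-upgrade-353156     # select the only context defined above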

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-242892

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-242892"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-242892"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-242892"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-242892"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-242892"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-242892"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-242892"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-242892"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-242892"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-242892"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-242892"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-242892"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-242892"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-242892"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-242892"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-242892"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-242892"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-242892"

                                                
                                                
----------------------- debugLogs end: kubenet-242892 [took: 3.933838524s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-242892" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-242892
--- SKIP: TestNetworkPlugins/group/kubenet (4.15s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.65s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-242892 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-242892

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-242892

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-242892

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-242892

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-242892

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-242892

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-242892

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-242892

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-242892

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-242892

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-242892"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-242892"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-242892"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-242892

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-242892"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-242892"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-242892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-242892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-242892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-242892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-242892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-242892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-242892" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-242892" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-242892"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-242892"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-242892"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-242892"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-242892"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-242892

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-242892

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-242892" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-242892" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-242892

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-242892

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-242892" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-242892" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-242892" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-242892" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-242892" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-242892"

>>> host: kubelet daemon config:
* Profile "cilium-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-242892"

>>> k8s: kubelet logs:
* Profile "cilium-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-242892"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-242892"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-242892"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21833-530629/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 10:06:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.4:8443
  name: kubernetes-upgrade-353156
contexts:
- context:
    cluster: kubernetes-upgrade-353156
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 10:06:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-353156
  name: kubernetes-upgrade-353156
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-353156
  user:
    client-certificate: /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/kubernetes-upgrade-353156/client.crt
    client-key: /home/jenkins/minikube-integration/21833-530629/.minikube/profiles/kubernetes-upgrade-353156/client.key

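The kubeconfig dump above contains only the kubernetes-upgrade-353156 cluster, context, and user, and current-context is empty, which matches the repeated `context "cilium-242892" does not exist` errors from the kubectl-based steps. As a sketch only (not part of the captured output, and assuming the same kubeconfig the debug script read), the available contexts could be confirmed with:

  kubectl config get-contexts
  kubectl config current-context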
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-242892

>>> host: docker daemon status:
* Profile "cilium-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-242892"

>>> host: docker daemon config:
* Profile "cilium-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-242892"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-242892"

>>> host: docker system info:
* Profile "cilium-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-242892"

>>> host: cri-docker daemon status:
* Profile "cilium-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-242892"

>>> host: cri-docker daemon config:
* Profile "cilium-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-242892"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-242892"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-242892"

>>> host: cri-dockerd version:
* Profile "cilium-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-242892"

>>> host: containerd daemon status:
* Profile "cilium-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-242892"

>>> host: containerd daemon config:
* Profile "cilium-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-242892"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-242892"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-242892"

>>> host: containerd config dump:
* Profile "cilium-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-242892"

>>> host: crio daemon status:
* Profile "cilium-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-242892"

>>> host: crio daemon config:
* Profile "cilium-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-242892"

>>> host: /etc/crio:
* Profile "cilium-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-242892"

>>> host: crio config:
* Profile "cilium-242892" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-242892"

----------------------- debugLogs end: cilium-242892 [took: 4.463127543s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-242892" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-242892
--- SKIP: TestNetworkPlugins/group/cilium (4.65s)
